Enhancing Your FlutterFlow App with a Solid Supabase Setup

So you’ve got an app that relies on Supabase.
This article is written with FlutterFlow in mind, but most of it applies to any Supabase-powered app.
Time for a little candor. Answer the following questions truthfully:
If your answer to any or all of these questions was “yes”, this article is for you. And if you get through it all, your apps will be more secure, better tested, easier to modify and develop, and less prone to losing all your users’ data.
A lot of this information comes from the Supabase docs here, but this article is aimed at FlutterFlow users and I’m going to try to make it all a little more accessible.
If you get through the advice in Level 1, good for you! You’re now doing a better job than most FlutterFlow users.
If you get through Level 2, you’re on the road to a robust and clean architecture that’s scalable and solid.
If you figure out Level 3, you’re unstoppable 💪
Let’s begin.
Level 1: The bare minimum
You need to separate your production data from your development data. Read that again if you have to.
That’s software engineering 101, and if you don’t do it, sooner or later you’ll corrupt your users’ data. This isn’t something you can knock out the day before go-live either. It may take a little time to learn, but this is not a “nice-to-have, let’s-do-it-when-we-can” task. It’s a big-boy-pants task.
Since we rely on tools like FlutterFlow’s preview mode in development, which connects to a database in the cloud, we need two cloud instances of Supabase. For instance, if your app is called “Hot Singles Mingle”, you’ll have two projects: “Hot Singles Mingle” (or “Hot Singles Mingle Prod”) and “Hot Singles Mingle DEV”. It’s similarly important to have two corresponding Firebase projects, but let’s focus on Supabase for now. Use FlutterFlow’s “Dev environments” feature to seamlessly switch between Prod and Dev.
Okay, congratulations, your position is defensible. You have two databases: one with all your production and user data safe, the other a sock-drawer of test data that you can burn at will. But what if you’re changing the structure of tables in dev, adding columns, setting RLS policies, and so on? How are you supposed to keep the two schemas synchronized? And how do you play around with the new schema and try things out before committing to them?
Well, that’s why we have:
Level 2: Installing Supabase locally
Here’s where the fun begins. You can actually download the whole suite that Supabase offers and run it locally on your machine, which lets you easily connect to the various elements: the dashboard, the API, storage, and so on.
For this, you need the Supabase CLI. CLI stands for “command line interface”, so it’s time to fire up the terminal 💻. You’ll need to learn a few basic terminal commands, like cd and ls, by which I mean ask ChatGPT to tell you what to type.
But first, head to the Supabase docs and follow the install instructions for your operating system.
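On macOS, for example, the Homebrew route looks like this (the docs also cover Windows, Linux, and an npm-based install, so use whichever matches your setup):
brew install supabase/tap/supabase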
To check whether you’ve installed the CLI successfully, type:
supabase --version
Once installed, log in to Supabase:
supabase login
Now, if you already have a folder of some kind that houses your code (a GitHub repo, a local folder with some cloud functions, etc.), it’s not a bad idea to bundle all this together. In my case, I usually run a Python API to support my FlutterFlow project, so the Supabase stuff will go in there.
If you don’t have a folder, you can just create a blank folder and start there.
cd into the folder, and run
supabase init
This will perform some setup magic and create a supabase/ folder with some configuration details.
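At this point, the freshly created folder looks roughly like this (exact contents vary a little between CLI versions):
supabase/
  config.toml
  .gitignore
The config.toml file holds the ports, auth settings, and other local configuration.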
Next:
supabase start
Behind the scenes, a tool called Docker will set up Supabase locally for you while you take a sip of your double-tall frappuccino mocha latte. (You’ll need to install Docker if you don’t have it.) When it’s done, the command will spit out all the credentials and connection details you need. If you ever lose these, supabase status will show them to you again.
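For reference, the output looks something like this (keys truncated here; the exact ports come from supabase/config.toml, so yours may differ):
API URL: http://127.0.0.1:54321
DB URL: postgresql://postgres:postgres@127.0.0.1:54322/postgres
Studio URL: http://127.0.0.1:54323
anon key: eyJh...
service_role key: eyJh...
JWT secret: super-secret-jwt-token-with-at-least-32-characters-long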
You’ll notice that the URLs have the format http://127.0.0.1:54321. 127.0.0.1 is the loopback address, i.e. your own machine. None of this touches the internet at all (in fact, you can do all of it offline), so it’s perfectly secure. That also explains why your super-secret "JWT secret" has a value of “super-secret-jwt-token-with-at-least-32-characters-long”. So don’t put sensitive data in this database, nor anything you want to keep: the whole point of all this is that this database is TOTALLY EXPENDABLE.
Now that the database is running, visit the Studio URL. This will probably be http://127.0.0.1:54323, but it’s in the output of supabase start anyway. The entire Supabase dashboard will be there and working. You can create tables, mess around, do whatever you like. In fact, do add some tables, so that I can show you how to wipe it all again:
supabase db reset
This one will wipe your database. It may not be obvious yet why being able to burn your whole database down at will is useful, but bear with me.
What we’ll do is link our dev database in the cloud to this setup, and pull the data down. You’ll need your database password (Project Settings > Database in your Supabase Dev instance in the cloud) and your project ID.
supabase link --project-ref <PROJECT ID>
Now we’ll sync up the cloud database with the local database, by pulling the cloud instance down:
supabase db pull
If you find your “migration history” doesn’t match, Supabase might offer to repair it, so just follow the suggestions it gives you. The idea is to sync up the previous changes (migrations) so that your local folder and your remote database have the same history.
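The suggested repair commands look something like this, with the version number taken from whatever the CLI prints for you:
supabase migration repair --status reverted <MIGRATION VERSION>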
Next, apply the migrations that the pull command created:
supabase migration up
Alternatively – and the idea is to be able to do this liberally – you can reset the database:
supabase db reset
This command will run the migrations and also use any seed data it finds in seed.sql. To create a seed file, we can dump some data from the remote database:
supabase db dump -f supabase/seed.sql --data-only
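At this point your supabase/ folder should look roughly like this (the timestamp prefix on the migration file will be the date of your pull):
supabase/
  config.toml
  seed.sql
  migrations/
    <timestamp>_remote_schema.sql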
Now, you can reset the database all you want, pull new changes from the remote repo, play around with RLS policies, add and destroy data, anything you want. It’s a sandbox to try things out.
However, when you've worked locally a little and are confident that you want to apply your changes to the cloud database, what then?
Level 3: Schema Migrations
Now that you have a synced-up migration history with your remote dev database, you can make changes via the locally running dashboard: add columns, create RLS policies, and so on. Because these changes sit on top of the last migration that was applied, the Supabase CLI can keep track of exactly what you’ve done.
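For example, sticking with our made-up “Hot Singles Mingle” schema (the table and column names are purely illustrative), a change you might make in the local SQL editor could look like this:
alter table public.singles
  add column mingle_rating integer default 0;

alter table public.singles enable row level security;

create policy "Authenticated users can read singles"
  on public.singles
  for select
  to authenticated
  using (true);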
You can capture all the changes you made via the local dashboard with the diff command:
supabase db diff -f <name of migration file>
You’ll need to give the migration file a descriptive name. For example, in our “Hot Singles Mingle” app, we might have a migration called:
supabase db diff -f add-mingle-rating-column-to-singles-table
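The diff command writes the changes to a new file under supabase/migrations/, prefixed with a timestamp, so you should end up with something like:
supabase/migrations/<timestamp>_add-mingle-rating-column-to-singles-table.sql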
Now, you can apply the migration remotely and bring your cloud instance’s database up to speed with your latest changes:
supabase db push
Supabase will ask you to confirm that you want to apply your new migration. Remember, the push action will change your remote database, so do it with care and be sure that the migration is solid. It’s a good idea to check the migration file that was produced before doing the push.
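If your CLI version supports it, a dry run is a handy sanity check: it lists the migrations that would be applied without actually touching the remote database.
supabase db push --dry-run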
Applying the migrations to the production database
We’ve now synced up our local and development databases, but that’s not much good unless we can also apply all of this to production.
The next step will depend on the state of your production database. If the database is brand new, you may want to apply any custom database roles first.
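One way to handle roles (assuming a reasonably recent CLI; check supabase db dump --help for the flag) is to dump them from your dev project and run the resulting roles.sql against prod in its SQL editor:
supabase db dump -f supabase/roles.sql --role-only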
Then run
supabase db push
This will apply the migrations you have locally, and your schema will be set up. Note that no data is pushed to production; the whole point of all of this is that the schemas match perfectly while the data differs. When you apply the new migrations to prod, the schema changes, but the data is not affected. Of course, some schema changes may fail because the existing data doesn’t allow for them. That’s exactly why we apply these things to the dev database first.
So, once everything is synced up, the workflow moving forward is:
- Make schema changes locally
- Create a migration file locally
- Push this migration to the dev database
- Test on dev
- Unlink from dev and link to prod (see the commands below)
- Push the migration to prod
- Test
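In terms of commands, that last stretch looks roughly like this (if your CLI version doesn’t have an unlink command, simply re-running link against the prod project ref has the same effect):
supabase unlink
supabase link --project-ref <PROD PROJECT ID>
supabase db push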
Bonus level: Hard-core data migrations
Schema migrations change the structure of your database and should be handled by migration files. However, sometimes you have data in one database that you’d like to move to another database. These one-off tasks are called “data migrations”.
For smaller datasets, you can fit all the data into a .sql text file.
supabase db dump -f data.sql --data-only
It’s actually simple to apply this to any database, remote or local, by pasting the contents of data.sql into the SQL editor.
For larger datasets this won’t be possible, as the SQL editor has a limit on how much data it can handle. For these, we’ll use the PostgreSQL client tools.
First, we’ll dump the data we need. If the database is a remote one, you can grab the connection string from the Supabase dashboard. Note that you may need the IPv4-compatible connection string, which is the second one down. If your data is local, the connection string is shown when you run supabase status or supabase start.
Once you have the connection string, with the password included in it, here’s the command to use:
pg_dump -Fc -n public -f data.dump <CONNECTION_STRING>
This will dump just the public schema (remove -n public if you want all the schemas) into a file called data.dump. The custom Postgres dump format (-Fc) is more compact and better suited to large datasets than the .sql format.
Before restoring this data into another database, it’s a good idea to create a restore list. This governs the order in which the data is restored. If you have foreign keys, you’ll need to restore the base table first and the table that references it afterward. For example, if a table has a user_id column pointing at your users table, you need to restore users first, or the restore will fail and may even partially restore the data, which is not good. So run this one:
pg_restore -l data.dump > restore.list
Now, open this file and look at the order in which the tables are listed. Rearrange the lines if necessary.
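The table-data entries in restore.list look roughly like this (the numeric IDs will differ, and “matches” is just a made-up table that references users): make sure users appears before any table that points at it.
217; 0 16496 TABLE DATA public users postgres
218; 0 16502 TABLE DATA public matches postgres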
And now, using this restore file, we’ll do the restore. Grab the connection string of the target database, and run
pg_restore --data-only --single-transaction -L restore.list -d <CONNECTION STRING> data.dump
The --single-transaction flag is really useful here, as it will roll back all changes if an error occurs, ensuring data consistency.
If you only wish to dump and restore particular tables, you can also do that. For example, to only restore data from the “users” table:
pg_restore --data-only --single-transaction -L restore.list -d <CONNECTION STRING> -t users data.dump
And that's the lot, you're now a Supabase pro 💪