The Python API

Although it's possible to use the Starter Kit without any coding (such as the ready-to-go login flows and the Supabase and Firebase integration), a little coding goes a long way here. Avoiding code will hurt your business in the long run, because NoCode tools are fundamentally limited in what they can do. This is why people love FlutterFlow; it's low-code, meaning a slightly tougher learning curve at first, and then unlimited potential thereafter.

If there are steps or instructions in this section that you don't understand or don't know how to do right away, that's perfectly normal. Most of the time, you can just pop these docs into ChatGPT or Claude and let it explain what you need to do, but it's also highly beneficial to be able to understand all the code. The code is just the implementation of how you want your app to behave, which is why it's fundamental that you're in the driver's seat.

Why deploy an API? (optional read)

Traditionally, applications would have the frontend – that's what FlutterFlow provides – and the backend, an API application that listens for requests and performs logic and database queries. But the FlutterFlow ecosystem promotes backend-as-a-service (BaaS) tools like Firebase, and this has led to the belief that a traditional backend is now completely unnecessary, which is really only true for the most basic and simplistic applications. Although it is possible to stitch a whole backend together with cloud functions, as your application grows this will easily get out of control.

As an example, let's say you have an EdTech app that takes a user's profile and creates a customized learning plan for them. You write a cloud function which takes the auth token, decodes it, queries Firestore for the user's profile and returns their preferences, then sends a request to OpenAI for processing. After that, it might perform some more processing with more database interaction before returning the results to the user.

Now let's say you want to do something similar but for a different app feature. You need to create all the token processing logic, the database connections, the API call logic, and everything else again from scratch in a new cloud function. In an application of medium complexity, you might end up with hundreds of these cloud functions, with all of this code copy-pasted into each one. If you then want to change something that applies to all of them, you'll need to do it a hundred times, and you'll probably make mistakes. With no automated tests to ensure that you didn't break anything, it becomes impossible to manage the codebase and bugs will appear faster than you can fix them.

The Starter Kit comes with an API backend, named ff-starterkit-python-api. Python recently took the title of the most popular programming language, but the real reason I chose it is that the language is so simple to read and write, especially for beginners. ff-starterkit-python-api handles the following for you:

  • Authorization token decoding
  • Automated test suite
  • Supabase and Firebase setup
  • Push notification triggers (including badges, grouping, histories, etc.)
  • Media storage
  • Email triggers
  • Admin dashboard
  • Admin privilege assignment
  • Privacy policy, terms of service, and data deletion webpages
  • A framework for creating completely custom business logic

Another reason why I advocate for getting into the code even if you're not a software developer is that AI has allowed for some really advanced tooling in the space. You can download Visual Studio Code or one of its AI-based derivatives like Cursor or Windsurf and get a lot of the code written for you.

The AI tools are not great at architecture, but that's the point of the Starter Kit – the AI editor will interpret the code from the FlutterFlow Starter Kit, explain it to you, and add new code that fits with the context. This helps to ensure that the code the AI produces won't be total garbage, which is pretty much guaranteed if you blindly use AI to create an API from scratch.

Actions to take

You'll need a code editor.

You can use any code editor: Atom, Sublime Text, Vim, etc., but I recommend Visual Studio Code (a.k.a. vscode). Later, you can get into AI editors like Cursor or Windsurf, but these are based on vscode anyway, so it's good to start simple and leave the AI tooling until you're a little more familiar.

Once you've downloaded and unzipped a local copy of ff-starterkit-python-api, rename the directory to match your project name (optionally appending "api" to identify it as the API repository), and open it in your code editor.

You'll also need to install a recent version of Python (the Kit requires a version newer than 3.10 but no newer than 3.12, i.e. 3.11 or 3.12) as well as Poetry. Once you have Python and Poetry installed, open a terminal (vscode has a "Terminal" option in the navbar), and run

poetry install

Next, we need to set the variables that are specific to your app. Many of these "environment variables" are secrets, so they live in a special file called .env.local in the root of the ff-starterkit-python-api/ directory. Rename the .env.template file to .env.local, then open it and add the values that apply to your app. Some of these values should be fairly obvious; the others will be explained as we go along.
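
If you'd like a quick sanity check that .env.local is complete before running the API, a throwaway script like the one below will do it. This is not part of the Kit – it's a minimal sketch, and the keys in REQUIRED are just examples taken from the Supabase section later in this guide; swap in whatever keys your .env.template actually contains.

# check_env.py – optional helper, not part of the Kit.
# The REQUIRED keys are examples; use the keys from your own .env.template.
from pathlib import Path

REQUIRED = ["SUPABASE_URL", "SUPABASE_ANON_KEY", "SUPABASE_SECRET_KEY", "SUPABASE_JWT_SECRET"]

def load_dotenv_file(path: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, ignoring blanks and comments."""
    values: dict[str, str] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip().strip('"')
    return values

if __name__ == "__main__":
    env = load_dotenv_file(".env.local")
    missing = [key for key in REQUIRED if not env.get(key)]
    print("Missing or empty:", missing or "none")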

Now we need to tell the API where to look to connect to Firebase and Supabase. For Firebase, you'll need the service account. Although this can be done in a number of ways, we'll use the gcloud CLI tool. It's better to do this now because it needs to be set up anyway in order to deploy the API to the public web.

Go to this link to install gcloud. How you do this depends on your operating system. After it's installed, log in by running each of these commands in the terminal. Replace <your project id> with your project ID, which you can get from the Firebase console.

export PROJECT_ID=<your project id>

gcloud auth application-default login
gcloud config set project $PROJECT_ID
gcloud auth application-default set-quota-project $PROJECT_ID
gcloud auth application-default print-access-token

This should open a browser where you can log in to Google (make sure it's the same Google account you use for Firebase). The commands also set the default credentials locally so that your API will be authenticated with Firebase automatically.
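
If you want to confirm that the Application Default Credentials were picked up, a couple of lines of Python will do it. This is an optional check of my own, not part of the Kit, and it assumes the google-auth package is available in your Poetry environment (it's a dependency of the Google and Firebase client libraries; check with poetry show if unsure).

# adc_check.py – optional, not part of the Kit.
# Resolves Application Default Credentials the same way the Google client
# libraries do. The project may be None in some setups; the important thing
# is that the call succeeds without raising DefaultCredentialsError.
import google.auth

credentials, project_id = google.auth.default()
print("Resolved project:", project_id)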

Next, go to your Supabase dashboard.

The best approach is to have two Supabase projects, one for production and one for development. I'll talk about production and development instances and deploys later, but for now, we'll install a third instance of Supabase – a local instance. You can read the full guide on running Supabase locally here, but the gist is that you install the Supabase CLI tool, along with Docker, and this will allow you to spin up the whole Supabase suite on your local machine.

To do this, run the following commands one at a time:

supabase login
supabase init
supabase start

This will generate links to URLs on your local machine, along with credentials you can use. In the local Supabase instance's dashboard, head to the SQL editor and run the SQL from this page. This will get you up and running quickly, but for more advanced management of your Supabase instances, check out this article.

Also, you'll now need to fill out the environment variable values in your .env.local file for the anon public key (SUPABASE_ANON_KEY), the URL (SUPABASE_URL), the service role key (SUPABASE_SECRET_KEY) and the JWT secret (SUPABASE_JWT_SECRET), all of which you can grab from the output of supabase start or supabase status.
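
To confirm those values actually work against the local instance, you can run a quick check like the sketch below. It assumes the supabase Python client is installed in the project's environment (likely, given the Kit's Supabase integration, but verify with poetry show) and that the .env.local values are exported into your shell.

# supabase_check.py – optional sanity check, not part of the Kit.
import os

from supabase import create_client

client = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_ANON_KEY"])

# With the anon key and RLS enabled, this may legitimately return zero rows;
# the point is only that the URL and key are accepted.
response = client.table("notifications").select("*").limit(1).execute()
print(response.data)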

Now, in the debug panel of the editor, you can run the Python app. The debug panel is the icon in vscode with a play button and a bug. There's a dropdown at the top left for FastAPI, Pytest, and Current File. Click the play button for FastAPI.

If all went well, it'll give you a URL that you can visit in your browser. FastAPI (the Python web framework the Kit uses) provides automatic interactive documentation at http://127.0.0.1:8080/docs.
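
The /docs page isn't something the Kit has to build by hand: FastAPI generates an OpenAPI schema and the Swagger UI from the declared routes automatically. The snippet below is purely illustrative (it's not the Kit's code) and shows the mechanism in its simplest form.

# A minimal FastAPI app – illustrative only, not the Kit's actual code.
from fastapi import FastAPI

app = FastAPI(title="Example API")

@app.get("/health")
def health_check() -> dict[str, str]:
    """A trivial endpoint; it appears in the interactive /docs page automatically."""
    return {"status": "ok"}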

It should now also be possible to run the unit test suite. In the Debug panel, click the play button for Pytest. The existing tests are basic and just check the RLS policies by running CRUD operations on the notifications table, but they should serve as a starting point for writing your own tests.
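
When you come to add tests of your own, FastAPI's TestClient is the usual tool. The sketch below shows the general shape of such a test; the import path your_project.main is a placeholder, so point it at wherever the FastAPI app object lives in your copy of the Kit, and note that TestClient needs the httpx package installed.

# test_example.py – a sketch of the kind of test you could add, not the Kit's tests.
from fastapi.testclient import TestClient

from your_project.main import app  # placeholder import path – adjust to your project

client = TestClient(app)

def test_docs_page_is_served():
    """The auto-generated docs page should respond with HTTP 200."""
    response = client.get("/docs")
    assert response.status_code == 200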

I've written a Python library called PyFlutterFlow that you can use to interact with the API. PyFlutterFlow is nothing fancy, just a collection of endpoints and helper functions that take care of some of the heavy lifting.

PyFlutterFlow gets installed automatically when you run poetry install, so the included API routes (like notifications endpoints, image upload endpoints, etc.) will show up in the interactive docs. PyFlutterFlow is also responsible for the admin dashboard, which should already be available if you visit http://127.0.0.1:8080/dashboard.
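
I won't claim this is PyFlutterFlow's exact wiring, but the general FastAPI pattern a library uses to contribute routes to an app looks like the hypothetical sketch below – the library ships APIRouter objects and the app (or the library's setup code) includes them.

# Illustrative only: how a library's routes typically get mounted in FastAPI.
# The router and endpoint here are hypothetical, not PyFlutterFlow's real API.
from fastapi import APIRouter, FastAPI

notifications_router = APIRouter(prefix="/notifications", tags=["notifications"])

@notifications_router.get("/")
def list_notifications():
    return []

app = FastAPI()
app.include_router(notifications_router)  # the routes now appear in /docs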

At this point it's good to note that you are not obliged to write any code of your own in Python, although doing so does unlock huge amounts of potential. In fact, even if you do little more than get the API deployed, it'll still provide you with

  • The admin dashboard UI
  • Terms of Service and privacy policy webpages to meet the App Store requirements
  • Terms of Service and privacy policy in-app links
  • Custom email tooling
  • Standard notification API endpoints
  • A data deletion form and support URL for App Store requirements
  • Image upload support

and a few other handy features.

Deploying the API: Actions to take

Before writing any code of your own in Python, you should deploy the API. This helps to limit the number of possible issues with the deploy: if something goes wrong, you'll know immediately that it's a problem with your configuration and setup, not with your own code.

You'll need the project ID, which you can get from the Firebase console. Choose a GCP region, ideally whichever is closest to your users, and choose a service name, which is often the same as whatever you have named this repository (e.g. myproject-api).

And finally, you'll need an environment file. This file is going to be similar to .env.local but gcloud requires a slightly different format, known as YAML. Create a file named .env.dev.yaml and another named env.prod.yaml.

It's up to you how rigidly you'd like to separate your environments, but simply put, you need .env.local for running the API locally, .env.dev.yaml for deploying the dev environment so that you can use it for developing alongside FlutterFlow, and env.prod.yaml for your production environment. This means you'll do the following steps at least twice, once with .env.dev.yaml and once with env.prod.yaml.

To see the format of the YAML file, take a look at the included .env.dev.template.yaml. You can rename this to .env.dev.yaml for the dev deploy, and deploy this first. Also be aware that environment variable values must be strings, so although a boolean value of true is correct in .env.local, it must be written as "true" in YAML, and any empty values must be removed.
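
If you'd rather not convert the values by hand, a throwaway script like the one below (not part of the Kit) can turn a dotenv-style file into the string-valued YAML format gcloud expects, dropping empty values along the way. Treat the output as a starting point only – the deployed environments will need their own values (for example, your hosted Supabase project rather than the local one), so review the file by hand before deploying.

# env_to_yaml.py – optional helper, not part of the Kit.
from pathlib import Path

def dotenv_to_yaml(src: str, dest: str) -> None:
    """Convert KEY=VALUE lines into 'KEY: "value"' lines, skipping empty values."""
    lines = []
    for raw in Path(src).read_text().splitlines():
        raw = raw.strip()
        if not raw or raw.startswith("#") or "=" not in raw:
            continue
        key, _, value = raw.partition("=")
        value = value.strip().strip('"')
        if value:  # empty values must be removed for gcloud
            lines.append(f'{key.strip()}: "{value}"')
    Path(dest).write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    dotenv_to_yaml(".env.local", ".env.dev.yaml")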

These are the commands to run. You can run them all at once if you wish, but it's better to run them one at a time, at least the first time, so you can identify any issues. Be sure to fill in appropriate values for PROJECT_ID, REGION, SERVICE_NAME, and ENV_FILE.

export PROJECT_ID=<your project id>
export REGION=<GCP region, e.g. us-central1>
export SERVICE_NAME=<give the deployment a name>
export ENV_FILE=.env.dev.yaml

# Set the current project
gcloud config set project $PROJECT_ID
gcloud auth application-default set-quota-project $PROJECT_ID

# Get the project number
export PROJECT_NUMBER=$(gcloud projects describe $(gcloud config get-value project) --format="value(projectNumber)")

# Run the first deploy with the env variables. Subsequent deploys can be done via Github Actions (if set up)
gcloud run deploy $SERVICE_NAME --source . --region=$REGION --platform=managed --allow-unauthenticated --service-account="$PROJECT_NUMBER-compute@developer.gserviceaccount.com" --env-vars-file $ENV_FILE

In case of any issues, an LLM like ChatGPT can usually get you through them; just paste in any errors. When the deploy succeeds, the output will provide you with a URL that you can visit in your browser. At this point, you can use routes like /docs or /dashboard, and also use the API URL in FlutterFlow API calls.
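
For a final smoke test, you can hit the deployed docs page from Python with nothing but the standard library. The URL below is a placeholder – substitute the one gcloud printed for your service.

# smoke_test.py – optional, not part of the Kit.
from urllib.request import urlopen

SERVICE_URL = "https://your-service-url.a.run.app"  # placeholder – use your Cloud Run URL

with urlopen(f"{SERVICE_URL}/docs") as response:
    print(response.status)  # expect 200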