Introduction
Once the database is set up, I need to make sure the schemas I define in my Next.js project stay in sync with my Supabase service. And that's what I'm gonna do in this article.
| Overview |
|---|
| First things first |
| The two syncing approaches |
| [[π Sync Database#Push *(not recommended)|Push (not recommended)]] |
| [[π Sync Database#Migration (recommended)|Migration (recommended)]] |
| Update schema without losing existing data |
First things first
Before I can actually put my data into the database, I need to create tables, and I might even add some seed data for dummy content. Otherwise it's gonna be a boring and totally empty database.
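As a sketch of what such a table definition could look like, here is a minimal Prisma schema with a single model. The `Post` model and its fields are just an illustration I made up, not something from this project:

```prisma
// prisma/schema.prisma — a minimal, hypothetical example model
model Post {
  id        Int      @id @default(autoincrement())
  title     String
  content   String?  // optional column
  createdAt DateTime @default(now())
}
```

Every model defined here becomes a table in the database once it is synced, which is what the rest of this article is about.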
The two syncing approaches
There are two different approaches to handling Prisma table changes. Push is a bit more messy and dirty and offers less tracking of changes, but it might be useful when you're just getting started with a project, it's not yet clear what the tables will look like, and you're not working in a team. Even then, it's still recommended to choose migration instead, because it offers more traceability and a database history similar to a Git history.
Push (not recommended)
The more chaotic and not documented approach that should be avoided in serious projects or in team work.
All I need to do to sync my Supabase database with my local Prisma schema is to push it with this command:

```
npx prisma db push
```

And then after I pushed, I also need to generate a new Prisma client, to make sure my generated types match the Prisma schema:

```
npx prisma generate
```

Migration (recommended)
The better documented and clean approach that offers a change history. Definitely the recommended approach.
Whenever I want to migrate the local prisma schema to supabase, I can easily do it with these commands:
```
npx prisma migrate dev --name describe-what-i-did
```

`--name`: the name I give the migration, just like a commit message. These messages then appear as a history, like in this example:

```
prisma/
  migrations/
    20240201120000_add-blog-model/
    20240201121500_add-blog-status/
    20240201123000_make-content-longer/
```

So when working with migrations, the workflow looks like this:
- ✏️ Edit schema.prisma
- ▶️ npx prisma migrate dev --name what-changed
- 💻 Keep coding
And then after I migrated, I also need to generate a new Prisma client, to make sure my generated types match the Prisma schema:

```
npx prisma generate
```

Update schema without losing existing data
It might happen that I need to update the Prisma schema after some real data has already been stored in the database. In that situation, I need to be careful not to erase the existing data. A simple example:

My schema should receive a new required attribute X. Now I run into the problem that the schema requires this new attribute, but my old data doesn't have it yet. Therefore I won't be able to sync the database due to this conflict.

A workaround is to assign a default value to the new attribute X, then sync the database, remove the default value again, and sync once more.

That way my old data receives the default value, while any new data will need to provide an individual one.
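As a sketch of that workaround, assume a hypothetical `Post` model that gains a new required `status` field (both names are just illustrations). The intermediate schema, before the default is removed again, could look like this:

```prisma
model Post {
  id    Int    @id @default(autoincrement())
  title String
  // New required attribute: temporarily given a default so that
  // existing rows can be backfilled during the migration.
  status String @default("draft")
}
```

After running the migration, I would delete the `@default("draft")` part and migrate once more, so that new rows have to set the value explicitly while the old rows keep their backfilled default.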
Conclusion
Now I know that there are two different ways to sync my database. While push is more convenient because it's easier and faster, I will still go with migration. I just like the clean and easy-to-understand history that migrations come with. And since professional production systems will almost certainly use migration instead of push, let's just get used to it from the beginning.