Persistence

SonicJs uses 3 layers of data persistence:

  1. Cache (In-Memory)
  2. KV (Key-Value pair store)
  3. D1 (A SQLite SQL database)

Why 3 Datastores?

SonicJs is all about low latency data delivery. If the client is requesting a single record and it's not already cached in that region, then we use KV to retrieve it, since it's the fastest way to deliver the content to the client.

However, if the client requests anything other than a single record, for example:

  1. Latest 10 blog posts
  2. Blog post with related user data
  3. Blog posts sorted and filtered

In these cases we use D1 (Cloudflare's SQLite product).

Caching (In Memory & KV Cache)

Caching is SonicJs' secret sauce when it comes to low latency performance...

SonicJs uses LokiJs to store previously requested endpoint results in an in-memory cache. This provides results to the client in the fastest possible way, even faster than using Cloudflare KV.

Keep in mind that Cloudflare has over 200 server nodes worldwide, so this can return results ridiculously fast - almost comparable to running your app locally against a local database!

For example, a user requests the last 5 blog posts via the URL:

http://localhost:8788/v1/posts?limit=5&offset=2&sortBy=created_on&sortDirection=desc

Here is the sequence of events that SonicJs uses to determine where to retrieve the data from:

  1. Is the requested data already in the in-memory cache?
    1. If yes, return the data from the cache (this is the fastest way to return the results). This is location dependent, so it will only return results when someone else in the same region has previously made the same request.
    2. If no, proceed to step #2.
  2. Are the results already in the KV-based cache?
    1. If yes, return the data from the KV cache (this is the second fastest way to return the results). Reads are still served from the nearest location, but the cached entry is valid if any user anywhere in the world has previously made the same request, thanks to step 3.2 below!
    2. If no, proceed to step #3.
  3. SonicJs will return the data from the Cloudflare D1 database (a SQLite database).
    1. SonicJs will then cache the data in the in-memory cache using the full URL as the cache key. This cache is confined to the current node, so it will only be called upon by requests coming from the same region.
    2. SonicJs will then cache the data in the KV cache using the full URL as the cache key. The cache propagates to all of Cloudflare's 200+ server nodes, so it can be utilized by users worldwide. And that's just freak'n awesome!
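
To make the sequence concrete, here is a minimal sketch of that lookup order. It is not SonicJs's actual internal code: the loadFromD1 parameter and the KV/D1 binding names are assumptions used purely for illustration (the KVNamespace and D1Database types come from Cloudflare's workers types).

// A minimal sketch of the cache -> KV -> D1 lookup order described above.
// Not SonicJs's real internals: loadFromD1 and the KV/D1 bindings are assumptions.
const memCache = new Map<string, unknown>(); // stands in for the LokiJs in-memory cache

async function getCachedData(
  url: string,
  env: { KV: KVNamespace; D1: D1Database },
  loadFromD1: (db: D1Database) => Promise<unknown>
) {
  // 1. In-memory cache: fastest, but confined to the current node
  if (memCache.has(url)) return memCache.get(url);

  // 2. KV cache: shared by every Cloudflare location
  const fromKv = await env.KV.get(url, 'json');
  if (fromKv !== null) {
    memCache.set(url, fromKv);
    return fromKv;
  }

  // 3. D1, then populate both caches using the full URL as the cache key
  const fromD1 = await loadFromD1(env.D1);
  memCache.set(url, fromD1);
  await env.KV.put(url, JSON.stringify(fromD1));
  return fromD1;
}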

Accessing D1 SQL Data

If you're just getting started with SonicJs, you don't need to worry much about the caching as it works out of the box. All of your data modeling activities will be focused around creating tables and their relationships using Cloudflare D1, a SQLite-based cloud database.

SonicJs uses Drizzle, a popular open source ORM that lets you write plain SQL or use its query API if you prefer. Here are a couple of examples:

API:

const allUsers = await db.select().from(users);

Plain SQL:

  const { results } = await db.prepare("select * from users").all();
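
For list requests like the posts URL shown earlier, Drizzle's query builder can handle the sorting and paging. The snippet below is only a sketch: it assumes a db Drizzle instance and a posts table object like the one defined in the schema section further down, with the audit column exposed as createdOn.

import { desc } from 'drizzle-orm';

// Roughly the query behind ?limit=5&offset=2&sortBy=created_on&sortDirection=desc
const latestPosts = await db
  .select()
  .from(posts)
  .orderBy(desc(posts.createdOn))
  .limit(5)
  .offset(2);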

Schema

SonicJs uses Drizzle for schema management. You can find the full docs here.

The SonicJs schemas can be found here: /src/db/schema/

Each table in your database will have one corresponding schema file where you define the name, route, fields, relationships with other tables, and access control (which roles can read/update/create/delete).

The simple blog schema that is part of the main branch is essentially a boilerplate/example. It can be fully altered to suit your needs.

Here is an example of the schema definition for a posts table along with its relationships to other tables:

/src/db/schema/posts

import { sqliteTable, index, text } from 'drizzle-orm/sqlite-core';

import { relations } from 'drizzle-orm';
import { auditSchema } from './audit';
import * as users from './users';
import * as categoriesToPosts from './categoriesToPosts';
import * as comments from './comments';
import { ApiConfig } from '../routes';
import { isAdmin, isAdminOrEditor } from '../config-helpers';

export const tableName = 'posts';

export const route = 'posts';

export const definition = {
  id: text('id').primaryKey(),
  title: text('title'),
  body: text('body'),
  userId: text('userId'),
  image: text('image'),
  images: text('images', { mode: 'json' }).$type<string[]>(),
  tags: text('tags', { mode: 'json' }).$type<string[]>()
};

export const table = sqliteTable(
  tableName,
  {
    ...definition,
    ...auditSchema
  },
  (table) => {
    return {
      userIdIndex: index('postUserIdIndex').on(table.userId)
    };
  }
);

export const relation = relations(table, ({ one, many }) => ({
  user: one(users.table, {
    fields: [table.userId],
    references: [users.table.id]
  }),
  categories: many(categoriesToPosts.table),
  comments: many(comments.table)
}));

export const access: ApiConfig['access'] = {
  operation: {
    read: true,
    create: isAdminOrEditor
  },
  filter: {
    // if a user tries to update a post and isn't the user that created the post the update won't happen
    update: (ctx) => {
      if (isAdmin(ctx)) {
        return true;
      } else {
        const user = ctx.get('user');
        if (user?.userId) {
          // Return filter so update doesn't happen if userId doesn't match
          return {
            userId: user.userId
          };
        } else {
          return false;
        }
      }
    },
    delete: (ctx) => {
      if (isAdmin(ctx)) {
        return true;
      } else {
        const user = ctx.get('user');
        if (user?.userId) {
          // Return filter so delete doesn't happen if userId doesn't match
          return {
            userId: user.userId
          };
        } else {
          return false;
        }
      }
    }
  },
  fields: {
    userId: {
      update: false
    }
  }
};

export const hooks: ApiConfig['hooks'] = {
  resolveInput: {
    create: (ctx, data) => {
      if (ctx.get('user')?.userId) {
        data.userId = ctx.get('user').userId;
      }
      return data;
    },
    update: (ctx, id, data) => {
      if (ctx.get('user')?.userId) {
        data.userId = ctx.get('user').userId;
      }
      return data;
    }
  }
};

export const fields: ApiConfig['fields'] = {
  image: {
    type: 'file',
    bucket: (ctx) => ctx.env.R2STORAGE,
    path: 'images'
  },
  images: {
    type: 'file[]',
    bucket: (ctx) => ctx.env.R2STORAGE,
    path: 'images'
  },
  tags: {
    type: 'string[]'
  }
};

Note that we're using a standard set of audit fields (auditSchema) for our tables so that we don't have to copy/paste the same fields into every table definition. This is completely optional.
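
If you're curious what that shared set of audit fields looks like, a sketch along these lines would match the created_on/updated_on columns that show up in the generated migrations further down; check /src/db/schema/audit.ts for the actual definition used by SonicJs.

// Hypothetical sketch of a shared audit schema; the real file may differ.
import { integer } from 'drizzle-orm/sqlite-core';

export const auditSchema = {
  createdOn: integer('created_on'),
  updatedOn: integer('updated_on')
};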

Adding a New Table

If you want to add a new table, it is advised that you copy one of the existing schema files. You'll add your new file into the [root]/src/db/schema/ folder.

Modify the file as necessary.
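
As a rough starting point, a stripped-down schema file for the myNewTable example used below might look like this sketch; the table name, route, fields, and access rules are all placeholders to adjust.

// /src/db/schema/myNewTable.ts - a minimal, hypothetical schema file.
// Table name, route, fields, and access rules are placeholders.
import { sqliteTable, text } from 'drizzle-orm/sqlite-core';
import { auditSchema } from './audit';
import { ApiConfig } from '../routes';

export const tableName = 'myNewTable';
export const route = 'my-new-table';

export const definition = {
  id: text('id').primaryKey(),
  title: text('title')
};

export const table = sqliteTable(tableName, {
  ...definition,
  ...auditSchema
});

export const access: ApiConfig['access'] = {
  operation: {
    read: true,
    create: true
  }
};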

You will need to modify the [root]/src/db/routes.ts file and add your new table configuration like so:

/src/db/routes.ts

...
export const tableSchemas = {
  users,
  posts,
  comments,
  categories,
  categoriesToPosts,
  userKeys,
  userSessions,
  myNewTable // <-- add your new table here
};

In order for SonicJs to see your new table, you need to create and apply a new database migration - see below.

Migrations

SonicJs also uses Drizzle's built-in support for database migrations, and we've set it up to work out of the box with Cloudflare D1 (SQLite).

After making a change to any of your schema definition files (/src/db/schema/*.ts) you need to run the following command to generate the migration file:

npm run generate

This will create a uniquely named migration file in the ./migrations folder. An example migration file will look something like this:

CREATE TABLE `categories` (
	`id` text PRIMARY KEY NOT NULL,
	`title` text,
	`body` text,
	`created_on` integer,
	`updated_on` integer
);
--> statement-breakpoint
CREATE TABLE `comments` (
	`id` text PRIMARY KEY NOT NULL,
	`body` text,
	`user_id` text,
	`post_id` integer,
	`created_on` integer,
	`updated_on` integer
);
...

It's a good idea to manually review the migration file to make sure it makes the intended updates before running it against your database.

To run the migration against your local development database:

npm run up

This will update your local D1 (SQLite) database.

And when you are ready to update your production database (in the Cloudflare Cloud):

npm run up:prod

Now that you've got a basic schema set up, you're ready to head over to the admin section and add some demo data. 🙌