NpgsqlRest NuGet Library

NpgsqlRest is a .NET 8 NuGet library that can build a RESTful API from existing PostgreSQL routines (functions or procedures) with a single extension method on the web application builder.

You can think of it as PostgREST, but only for routines (functions or procedures), since PostgREST does a lot more. It is implemented as .NET 8 middleware so you can create custom builds or integrate it with your .NET 8 projects, and it is highly customizable, with some unique features.

The library is AOT-ready, which means it can utilize the .NET AOT feature to create self-contained, native binary executables, but more on that later.

Motivation

This is built to be a crucial part of the future low-code data platform based on PostgreSQL. And, as such, it was built to satisfy very specific requirements (as opposed to catering to a wider audience). For example, it has two dependencies: 1) the latest Microsoft.NET.Sdk.Web, targeting the net8.0 framework, and 2) the latest Npgsql ADO.NET data provider for PostgreSQL (currently version 8.0.1). Those dependencies will be updated with each new release. If you need to work with older versions, you need to either 1) update them in your project, or 2) clone the NpgsqlRest repository and make a custom build with downgraded dependencies.

Furthermore, future development of new features depends largely on the main project that will be using this library, but with the right sponsorship, anything is possible.

Features and Usage

The most basic usage in a standard .NET 8 web application looks like this:

using NpgsqlRest;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.UseNpgsqlRest(new()
{
    ConnectionString = "Host=localhost;Port=5432;Database=my_db;Username=postgres;Password=postgres"
    //
    // the rest of the options goes here
    //
});

app.Run();

So, all you need to do is call the UseNpgsqlRest extension method on your web application and provide a valid Npgsql connection string for the PostgreSQL database. Of course, there are many available options that you can use to configure all endpoints or specific endpoints. The best way to explore the available options is to check out the source code of the NpgsqlRestOptions class.
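As a rough sketch, a couple of the options covered later in this article can be combined like this (the connectionString variable is just a placeholder for a valid connection string):

using NpgsqlRest;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// placeholder for a valid Npgsql connection string
var connectionString = "Host=localhost;Port=5432;Database=my_db;Username=postgres;Password=postgres";

app.UseNpgsqlRest(new(connectionString)
{
    // generate an HTTP file for testing (explained below)
    HttpFileOptions = new() { FileOverwrite = true },
    // only create endpoints for routines with an HTTP comment annotation (explained below)
    CommentsMode = CommentsMode.OnlyWithHttpTag
});

app.Run();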

And now, if you happen to have some PostgreSQL functions, like, for example, this hello world function:

create function hello_world()         
returns text 
language sql
as $$
select 'Hello World'
$$;

You will have a new RESTful endpoint exposed in your .NET application.

This endpoint will be a POST. The currently implemented rules are:

The endpoint is GET if the routine name either:

  • Starts with get_ (case insensitive).
  • Ends with _get (case insensitive).
  • Contains _get_ (case insensitive).

Otherwise, the endpoint is POST if the volatility option is VOLATILE (the PostgreSQL default). These types of functions are assumed to change some data. Of course, all of that is highly configurable, either through code options or through comment annotations, which we will see later.
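For example, under these naming rules, a function like the following (the name is just an illustration) would be exposed as a GET endpoint, at /api/get-hello-world under the default URL convention shown below:

create function get_hello_world()
returns text
language sql
as $$
select 'Hello World'
$$;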

And now, in order to enable endpoint discovery and to facilitate testing and debugging, NpgsqlRest can build an HTTP file for testing. HTTP file generation is not enabled by default, so it has to be enabled in the builder options:

app.UseNpgsqlRest(new()
{
    ConnectionString = "Host=localhost;Port=5432;Database=my_db;Username=postgres;Password=postgres",
    HttpFileOptions = new() { FileOverwrite = true }
});

This will produce the following HTTP file, which we can use for further testing and debugging with Visual Studio or VS Code (with the REST Client extension):

@host=http://localhost:5000

// function public.hello_world()
// returns text
POST {{host}}/api/hello-world

Anyway, you can notice that the HTTP file also contains a nicely formatted comment header with the basic function signature, so it is clear exactly which function will be executed. You can even include the entire function definition in this file if you want. Also, if your function happens to have parameters, the HTTP file will contain populated test parameters so that the endpoint is ready to be executed immediately.

And, when this endpoint is executed, the results are as expected:

HTTP/1.1 200 OK                       
Connection: close
Content-Type: text/plain
Date: Tue, 09 Jan 2024 14:25:26 GMT
Server: Kestrel
Transfer-Encoding: chunked

Hello World

That would be the basic usage (with HTTP file generation enabled). Other features include (an incomplete list):

  • Array parameters in query strings or JSON body and array type serialization in responses.
  • Entire body content to single parameter assignment.
  • Comment annotation endpoint configuration.
  • Default parameters support.
  • Schema control (different paths, HTTP files).
  • Parameter overloading.
  • Sending request headers as context, or as a parameter (or neither).
  • Returning sets of primitive types.
  • Returning tables.
  • Returning sets of existing tables (table-valued functions).
  • Returning sets of record types (untyped functions).
  • Unnamed parameters support.
  • Variadic parameters support.
  • Custom naming conventions for URLs, request parameters or response JSON fields.
  • Custom validation events and custom parameter values.

Untyped functions have proven to be a very useful feature because the underlying function doesn't require a result table definition at all:

create function get_test_records() 
returns setof record -- see, no table definition
language sql
as 
$$
select * from (values 
    (1, 'one'), 
    (2, 'two'), 
    (3, 'three')
) v;
$$;

But, on the other hand, calling clients have to know which types to expect, because the resulting response will be a JSON array where every element is an array of raw PostgreSQL values encoded as JSON strings.
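For the get_test_records function above, that means the response would look roughly like this (a sketch; every value arrives as a raw string, so the client has to know that the first column is actually an integer):

[
    ["1", "one"],
    ["2", "two"],
    ["3", "three"]
]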

Another interesting feature is the custom validation events, used when, for example, you want to supply some custom value from your code to certain parameters. A common example is sending the Context.User.Identity.Name value as a function parameter:

app.UseNpgsqlRest(new(connectionString)
{
    ValidateParameters = p =>
    {
        //
        // If user is authenticated and parameter name is "user", always send Context.User.Identity.Name
        //
        if (p.Context.User.Identity?.IsAuthenticated == true && 
            string.Equals(p.ParamName, "user", StringComparison.OrdinalIgnoreCase))
        {
            p.Parameter.Value = p.Context.User.Identity.Name;
        }
    } 
});

There are also events fired when an endpoint is created and when all endpoints are created. These are very useful when we want to generate some interfaces, like TypeScript interfaces for the frontend, or even entire fetch scripts; all the required data is passed to these events.

For more usage examples of these features, the best place to look is the test project.

Now, there are two unique features that I'd like to discuss in a bit more depth:

  1. Comment annotations.
  2. AOT support.

Comment Annotations

The basic idea is that we can add text annotations to routine comments that can alter endpoint behavior and properties.

PostgreSQL has supported adding custom comments on all objects since, well, forever. See the comment documentation. You can add any text to your functions or procedures:

COMMENT ON FUNCTION my_function (timestamp) IS 'Returns Roman Numeral';
COMMENT ON PROCEDURE my_proc (integer, integer) IS 'Runs a report';

In the example above, the function hello_world() generates the HTTP endpoint POST http://localhost:5000/api/hello-world. We can change this to a GET endpoint with the following comment:

COMMENT ON FUNCTION hello_world () IS 'HTTP GET';

We can change the generated path to something else too:

COMMENT ON FUNCTION hello_world () IS 'HTTP GET /say-hello-world';

Our endpoint is now GET http://localhost:5000/say-hello-world. This approach doesn't stop us from adding custom comments. If the comment line is not a recognizable annotation, it is simply ignored, since it is, well, just a comment. For example, the following comment has the same effect:

COMMENT ON FUNCTION hello_world () IS 'Simple hello world example.
HTTP GET /say-hello-world
Another comment line...';

A valid comment annotation has to be on a line that starts with HTTP; the next two optional words are the method verb and the URL. All additional annotations go below this HTTP line. For example, if we add a line that contains a colon (:) character below this HTTP line, it is interpreted as a new request header, where the left side of the colon is the key and the right side is the header value:

COMMENT ON FUNCTION hello_world () IS 'HTTP GET /say-hello-world
Content-Type: text/html';

This forces the content returned from GET /say-hello-world to have the header Content-Type: text/html.

Note that these comments don't have to be managed through scripts like this. For some people, it is much easier to do through an administration GUI such as pgAdmin or DBeaver: just open the function properties and fill in the comment box in your GUI tool. But you do have to do it with a database user with elevated permissions (superuser or admin).

You can control lots of other cool stuff like this:

  • Request parameter type (query string or JSON body). For example, a param-type query line forces parameters into the query string.
  • Authorization control: just adding requires-authorization or authorize will make the endpoint return not authorized if the user is not, well, authorized.
  • Command timeout: add timeout <number> to set the command timeout to that number of seconds.
  • Request headers mode: use request-headers <option> to set the mode to ignore, context (request headers are set on the context) or parameter (request headers are sent as a parameter).
  • Request headers parameter name: use request-headers-param-name <name> to set the name of the parameter that receives the request headers when the request-headers parameter option is set. The default is header, but you can change it here.
  • Body parameter name: you can bind the entire request body content to a single parameter with the body-param-name <name> annotation (a combined example follows this list).
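For illustration, several of the annotations from this list can be combined in a single comment (a sketch; the exact behavior of each annotation is best verified against DefaultEndpoint.cs and the test project):

COMMENT ON FUNCTION hello_world () IS 'HTTP GET /say-hello-world
authorize
timeout 120
request-headers context
Content-Type: text/html';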

More info on this can be found in the default endpoint parser code, DefaultEndpoint.cs, and usage examples are also in the test project.

By default, all comments will be parsed for annotations. This behavior can be changed in a builder options configuration:

// ignore comment annotations
app.UseNpgsqlRest(new(connectionString) { CommentsMode = CommentsMode.Ignore });

If we set this mode to the OnlyWithHttpTag value:

app.UseNpgsqlRest(new(connectionString) { CommentsMode = CommentsMode.OnlyWithHttpTag });

then we can control whether an endpoint is created at all. That means that the comment must contain at least an HTTP tag:

COMMENT ON FUNCTION hello_world () IS 'HTTP';

for the endpoint to be enabled at all. This way, we (or the DB admin) can control the creation of endpoints and choose exactly which functions and procedures are exposed as HTTP endpoints.

AOT Support

This was a design goal from the beginning. The idea is to create a native executable for the target platform (usually Linux) and simply copy it to a server and run it like any other program. There is no need for any runtime; the executable is self-contained and has a very fast startup time.

This, of course, is optional; AOT builds are not for every environment, and some requirements need to be satisfied (like no reflection code). But if you do choose an AOT build, this is the minimal configuration.

First, you'll need a <PublishAot>true</PublishAot> tag in the property group of your project file. It can look like this:

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <InvariantGlobalization>true</InvariantGlobalization>
    <PublishAot>true</PublishAot>
    <NoDefaultLaunchSettingsFile>true</NoDefaultLaunchSettingsFile>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="NpgsqlRest" Version="1.3.0" />
  </ItemGroup>

</Project>

And then, the minimal build in the main program looks like this:

using NpgsqlRest;

var builder = WebApplication.CreateEmptyBuilder(new ());
builder.WebHost.UseKestrelCore();
var app = builder.Build();
app.UseNpgsqlRest(new("Host=127.0.0.1;Port=5432;Database=database_name;Username=postgres;Password=postgres"));
app.Run();

This is just the minimal build; naturally, you can add things like logging and configuration and customize endpoints to your needs.
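For example, a sketch of adding console logging and optional JSON configuration on top of the minimal build could look like this (the appsettings.json file and the "Default" connection string name are assumptions for the example, not something the library requires):

using NpgsqlRest;

var builder = WebApplication.CreateEmptyBuilder(new());
builder.WebHost.UseKestrelCore();

// the empty builder has no logging providers or configuration sources, so add them explicitly
builder.Logging.AddConsole();
builder.Configuration.AddJsonFile("appsettings.json", optional: true);

// read the connection string from configuration, falling back to a hard-coded default
var connectionString = builder.Configuration.GetConnectionString("Default")
    ?? "Host=127.0.0.1;Port=5432;Database=database_name;Username=postgres;Password=postgres";

var app = builder.Build();
app.UseNpgsqlRest(new(connectionString));
app.Run();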

The next step is to launch a build process with dotnet publish -r linux-x64 -c Release. This step has to be performed in a Linux environment since we are targeting a Linux build, and, at this point, cross-OS native compilation is not supported yet. This will produce a runnable, self-contained executable that is exactly 20,041,896 bytes in size, or approximately 20 MB.

There, you don't even need Docker.

Performance

I was excited to see some performance benchmarks since I already knew how well .NET 8 is optimized for performance. The only thing I could compare those benchmarks to is the performance of the same operations on a PostgREST instance.

First, some numbers:

  • The NpgsqlRest AOT Linux executable is 20,050,120 bytes, or approximately 20 MB.
  • The PostgREST version 12.0.2 Linux executable is 17,398,464 bytes, or approximately 17.3 MB.

As far as memory usage goes, it does seem that NpgsqlRest is a bit better, but, again, it is far less feature-rich than PostgREST. Here is the output from ps aux:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
vbilopav  1613  605  0.5 274961008 91656 pts/5 Sl+  13:44   7:22 ./NpgsqlRestPerfTest
vbilopav   693  311  0.6 1073782668 102964 pts/0 Sl+ 13:41  10:28 ./postgrest postgrest.conf

For a testing scenario, I've created a single PostgreSQL function that has 5 parameters of different types and returns a 10-field table, where the number of records is determined by the first parameter:

create function perf_test(
    _records int,
    _text_param text,
    _int_param int,
    _ts_param timestamp,
    _bool_param bool
) 
returns table(
    id1 int, 
    foo1 text, 
    bar1 text, 
    datetime1 timestamp, 
    id2 int, 
    foo2 text, 
    bar2 text, 
    datetime2 timestamp,
    long_foo_bar text, 
    is_foobar bool
)
language sql
as
$$
select 
    i + _int_param as id1, 
    'foo' || '_' || _text_param || '_' || i::text as foo1, 
    'bar' || i::text as bar1, 
    (_ts_param::date) + (i::text || ' days')::interval as datetime1, 
    i+1 + _int_param as id2, 
    'foo' || '_' || _text_param || '_' || (i+1)::text as foo2, 
    'bar' || '_' || _text_param || '_' || (i+1)::text as bar2, 
    (_ts_param::date) + ((i+1)::text || ' days')::interval as datetime2,
    'long_foo_bar_' || '_' || _text_param || '_' || (i+2)::text as long_foo_bar, 
    (i % 2)::boolean and _bool_param as is_foobar
from 
    generate_series(1, _records) as i
$$;

The testing suite is the k6 load testing system from Grafana Labs, and there are four different testing scenarios prepared:

  1. The 60-second performance tests with 100 simulated virtual users retrieving 1 record from the endpoint.
  2. The 60-second performance tests with 100 simulated virtual users retrieving 5 records from the endpoint.
  3. The 60-second performance tests with 100 simulated virtual users retrieving 50 records from the endpoint.
  4. The 60-second performance tests with 100 simulated virtual users retrieving 100 records from the endpoint.

Let's see the results for the first scenario.

Results
  • PostgREST:
vbilopav@DESKTOP-O3A6QK2:~/k6$ k6 run api-test.js

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: api-test.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 1m30s max duration (incl. graceful stop):
           * breaking: Up to 100 looping VUs for 1m0s over 1 stages (gracefulRampDown: 30s, gracefulStop: 30s)


     ✓ response code was 200

     checks.........................: 100.00% ✓ 60875       ✗ 0
     data_received..................: 26 MB   425 kB/s
     data_sent......................: 15 MB   246 kB/s
     http_req_blocked...............: avg=3.95µs  min=459ns    med=2.9µs   max=6.3ms   p(90)=4.66µs   p(95)=5.31µs
     http_req_connecting............: avg=441ns   min=0s       med=0s      max=1.82ms  p(90)=0s       p(95)=0s
   ✓ http_req_duration..............: avg=49.34ms min=569.91µs med=3.29ms  max=1.31s   p(90)=168.6ms  p(95)=220.51ms
       { expected_response:true }...: avg=49.34ms min=569.91µs med=3.29ms  max=1.31s   p(90)=168.6ms  p(95)=220.51ms
   ✓ http_req_failed................: 0.00%   ✓ 0           ✗ 60875
     http_req_receiving.............: avg=64.33µs min=6.74µs   med=48.28µs max=19.57ms p(90)=104.87µs p(95)=134.31µs
     http_req_sending...............: avg=20.88µs min=2.75µs   med=16.35µs max=19.35ms p(90)=26.44µs  p(95)=36.55µs
     http_req_tls_handshaking.......: avg=0s      min=0s       med=0s      max=0s      p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=49.26ms min=540.39µs med=3.19ms  max=1.31s   p(90)=168.47ms p(95)=220.43ms
     http_reqs......................: 60875   1010.718619/s
     iteration_duration.............: avg=49.49ms min=621.83µs med=3.47ms  max=1.31s   p(90)=168.77ms p(95)=220.69ms
     iterations.....................: 60875   1010.718619/s
     vus............................: 99      min=2         max=99
     vus_max........................: 100     min=100       max=100


running (1m00.2s), 000/100 VUs, 60875 complete and 0 interrupted iterations
breaking ✓ [======================================] 000/100 VUs  1m0s
  • NpgsqlRest:
vbilopav@DESKTOP-O3A6QK2:~/k6$ k6 run api-test.js

          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: api-test.js
     output: -

  scenarios: (100.00%) 1 scenario, 100 max VUs, 1m30s max duration (incl. graceful stop):
           * breaking: Up to 100 looping VUs for 1m0s over 1 stages (gracefulRampDown: 30s, gracefulStop: 30s)


     ✓ response code was 200

     checks.........................: 100.00% ✓ 280845      ✗ 0
     data_received..................: 150 MB  2.5 MB/s
     data_sent......................: 68 MB   1.1 MB/s
     http_req_blocked...............: avg=4.85µs   min=569ns    med=3.26µs  max=10.01ms  p(90)=4.96µs   p(95)=5.7µs
     http_req_connecting............: avg=248ns    min=0s       med=0s      max=6.03ms   p(90)=0s       p(95)=0s
   ✓ http_req_duration..............: avg=10.45ms  min=480.9µs  med=8.38ms  max=314.61ms p(90)=19.13ms  p(95)=25.41ms
       { expected_response:true }...: avg=10.45ms  min=480.9µs  med=8.38ms  max=314.61ms p(90)=19.13ms  p(95)=25.41ms
   ✓ http_req_failed................: 0.00%   ✓ 0           ✗ 280845
     http_req_receiving.............: avg=117.62µs min=11.2µs   med=49.11µs max=97.22ms  p(90)=116.38µs p(95)=227.97µs
     http_req_sending...............: avg=28.84µs  min=3.66µs   med=17.55µs max=41.27ms  p(90)=26.57µs  p(95)=40.01µs
     http_req_tls_handshaking.......: avg=0s       min=0s       med=0s      max=0s       p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=10.3ms   min=439.3µs  med=8.26ms  max=314.55ms p(90)=18.95ms  p(95)=25.17ms
     http_reqs......................: 280845  4679.187115/s
     iteration_duration.............: avg=10.66ms  min=525.77µs med=8.58ms  max=314.74ms p(90)=19.4ms   p(95)=25.7ms
     iterations.....................: 280845  4679.187115/s
     vus............................: 99      min=2         max=99
     vus_max........................: 100     min=100       max=100


running (1m00.0s), 000/100 VUs, 280845 complete and 0 interrupted iterations
breaking ✓ [======================================] 000/100 VUs  1m0s

In short, running for 60 seconds with 100 simulated virtual users, PostgREST processed 60,875 successful requests and NpgsqlRest processed 280,845 successful requests! That is 280,845 / 60,875 ≈ 4.61, so NpgsqlRest handled roughly 4.61 times as many requests!

I knew that .NET 8 is very well optimized, but I didn't expect this much. Being able to process 280K requests instead of 60K is very significant, and a money saver.

Next, I wanted to see how the results would change if I started returning more than 1 record. When a PostgreSQL call returns a record, it needs to be converted from the raw format returned by the server into JSON, which requires a fair amount of string manipulation. So, in that case, the performance of the built-in web server matters less. I ran the performance tests again for 5, 50, and 100 returned records. Here is the table with condensed results:

Records | NpgsqlRest requests | PostgREST requests | Ratio
      1 |             280,845 |             60,875 |  4.61
      5 |             310,123 |             56,188 |  5.51
     50 |             164,208 |             71,430 |  2.29
    100 |             143,110 |             57,217 |  2.50

Note that you can see all the test files and the complete detailed results output in the performance test source code. All tests were conducted on my poor little laptop. If someone has the time and will, they can be repeated in a more isolated environment.

Now, we can see that the ratio is getting lower with a higher number of returned records. I'm not sure what exactly to make of it.

Yes, well, .NET 8 is faster, that much is clear for now. Somewhere between 2.5 and 5.5 times, approximately. "It depends," as developers like to say.

Roadmap

As I said before, NpgsqlRest will be a crucial part of the future data platform, and, as such, active development will continue. However, I cannot guarantee that all new features (or even fixes) will be published and open-sourced. That might change if enough interest is generated, and perhaps with the right sponsorship, because I don't have much of it as it stands.

However, that is no reason not to discuss and consider possible features:

Live Reload Feature

The idea behind this feature would be the ability to recreate endpoints without having to restart the application. You could simply turn an endpoint on or off (or change the URL or something else) by using comment annotations and do a live refresh without restarting.

First of all, this would require some serious refactoring. And I'm not entirely convinced it is needed, since a lot of accompanying code is going to be generated on endpoint creation events.

Tables And Views Support

PostgREST supports endpoints from tables and views, so why shouldn't NpgsqlRest as well?

I don't know about this. I could expose a view as an endpoint, for example, but that could be dangerous because a view can return an insane amount of data that would crash your application. The same goes for tables.

It would be interesting to have the ability to enable CRUD endpoints for certain tables, possibly with conflict resolution and all. But that approach feels to me very much like a giant can of worms that I don't wish to open. A sort of HTTP ORM!? Yikes.

Besides, my design philosophy is to encapsulate everything into SQL functions and procedures. It isn't that hard to write a function that returns your table or your view, is it?
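For example, assuming a hypothetical customers table, a thin wrapper function is enough, and it gives you a natural place to add limits or filtering (and, thanks to the get_ prefix, it becomes a GET endpoint):

create function get_customers(_limit int = 100)
returns setof customers
language sql
as $$
select * from customers limit _limit
$$;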

Other Service Types

Like gRPC services for example? Or SignalR socket streaming?

Certainly, but I cannot guarantee that it will be open-sourced; maybe it will, I don't know. Or even that I would build something like that. However, from this standpoint, it shouldn't be too hard (if not even trivial). Luckily, this library is open source, so you can try it yourself.


That is it.

The library is already fairly mature at version 1.4.0 and is being actively used on one of my projects. If you find it useful and interesting, visit the GitHub repository and give it a star.

I would also like to hear in the comments what other people think, especially about those performance tests above, because I don't know what to say; that goes beyond my expertise. I'm just a guy who likes to write SQL. So, leave a comment below in the comment section, and you can ask me anything.

To receive notifications about new posts and updates, consider subscribing to my LinkedIn page:
vb-software linkedin

You will receive notifications about new posts on your LinkedIn feed.