- 🚀 Fastest full-featured node & deno client
- 🏷 ES6 Tagged Template Strings at the core
- 🏄♀️ Simple surface API
- 🖊️ Dynamic query support
- 💬 Chat and help on Gitter
- 🐦 Follow on Twitter
Getting started
Usage
Create your sql database instance
// db.js
import postgres from 'https://deno.land/x/postgresjs/mod.js'
const sql = postgres({ /* options */ }) // will use psql environment variables
export default sql
Simply import for use elsewhere
// users.js
import sql from './db.js'
async function getUsersOver(age) {
const users = await sql`
select
name,
age
from users
where age > ${ age }
`
// users = Result [{ name: "Walter", age: 80 }, { name: 'Murray', age: 68 }, ...]
return users
}
async function insertUser({ name, age }) {
const users = await sql`
insert into users
(name, age)
values
(${ name }, ${ age })
returning name, age
`
// users = Result [{ name: "Murray", age: 68 }]
return users
}
ESM dynamic imports
The library can also be used with ESM dynamic imports, as shown below.
const { default: postgres } = await import('postgres')
Table of Contents
- Connection
- Queries
- Building queries
- Advanced query methods
- Transactions
- Data Transformation
- Listen & notify
- Realtime subscribe
- Numbers, bigint, numeric
- Result Array
- Connection details
- Custom Types
- Teardown / Cleanup
- Error handling
- TypeScript support
- Reserving connections
- Changelog
Connection
postgres([url], [options])
You can use either a postgres:// url connection string or the options to define your database connection properties. Options in the object will override any present in the url. Options will fall back to the same environment variables as psql.
const sql = postgres('postgres://username:password@host:port/database', {
host : '', // Postgres ip address[es] or domain name[s]
port : 5432, // Postgres server port[s]
database : '', // Name of database to connect to
username : '', // Username of database user
password : '', // Password of database user
...and more
})
More options can be found in the Connection details section.
Queries
await sql`...` -> Result[]
Postgres.js utilizes Tagged template functions to process query parameters before interpolation. Using tagged template literals benefits developers by:
- Enforcing safe query generation
- Giving the sql`` function powerful utility and query building features.
Any generic value will be serialized according to an inferred type, and replaced by a PostgreSQL protocol placeholder $1, $2, and so on. The parameters are then sent separately to the database which handles escaping & casting.
All queries will return a Result array, with objects mapping column names to each row.
const xs = await sql`
insert into users (
name, age
) values (
'Murray', 68
)
returning *
`
// xs = [{ user_id: 1, name: 'Murray', age: 68 }]
Please note that queries are first executed when awaited – or instantly by using .execute().
Query parameters
Parameters are automatically extracted and handled by the database so that SQL injection isn’t possible. No special handling is necessary, simply use tagged template literals as usual.
const name = 'Mur'
, age = 60
const users = await sql`
select
name,
age
from users
where
name like ${ name + '%' }
and age > ${ age }
`
// users = [{ name: 'Murray', age: 68 }]
Be careful with quotation marks here. Because Postgres infers column types, you do not need to wrap your interpolated parameters in quotes like '${name}'. This will cause an error because the tagged template replaces ${name} with $1 in the query string, leaving Postgres to do the interpolation. If you wrap that in a string, Postgres will see '$1' and interpret it as a string as opposed to a parameter.
Dynamic column selection
const columns = ['name', 'age']
await sql`
select
${ sql(columns) }
from users
`
// Which results in:
select "name", "age" from users
Dynamic inserts
const user = {
name: 'Murray',
age: 68
}
await sql`
insert into users ${
sql(user, 'name', 'age')
}
`
// Which results in:
insert into users ("name", "age") values ($1, $2)
// The columns can also be given with an array
const columns = ['name', 'age']
await sql`
insert into users ${
sql(user, columns)
}
`
You can omit column names and simply execute sql(user) to get all the fields from the object as columns. Be careful not to allow users to supply columns that you do not want to be inserted.
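For example, a minimal sketch of inserting all object keys as columns (the users table and its values are only assumed here for illustration):
const user = { name: 'Murray', age: 68 }

// All keys of the object become columns
await sql`insert into users ${ sql(user) }`

// Which results in:
insert into users ("name", "age") values ($1, $2)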
Multiple inserts in one query
If you need to insert multiple rows at the same time it’s also much faster to do it with a single insert. Simply pass an array of objects to sql().
const users = [{
name: 'Murray',
age: 68,
garbage: 'ignore'
},
{
name: 'Walter',
age: 80
}]
await sql`insert into users ${ sql(users, 'name', 'age') }`
// Is translated to:
insert into users ("name", "age") values ($1, $2), ($3, $4)
// Here you can also omit column names which will use object keys as columns
await sql`insert into users ${ sql(users) }`
// Which results in:
insert into users ("name", "age") values ($1, $2), ($3, $4)
Dynamic columns in updates
This is also useful for update queries
const user = {
id: 1,
name: 'Murray',
age: 68
}
await sql`
update users set ${
sql(user, 'name', 'age')
}
where user_id = ${ user.id }
`
// Which results in:
update users set "name" = $1, "age" = $2 where user_id = $3
// The columns can also be given with an array
const columns = ['name', 'age']
await sql`
update users set ${
sql(user, columns)
}
where user_id = ${ user.id }
`
Multiple updates in one query
To create multiple updates in a single query, it is necessary to use arrays instead of objects to ensure that the order of the items corresponds with the column names.
const users = [
[1, 'John', 34],
[2, 'Jane', 27],
]
await sql`
update users set name = update_data.name, age = (update_data.age)::int
from (values ${sql(users)}) as update_data (id, name, age)
where users.id = (update_data.id)::int
returning users.id, users.name, users.age
`
where in
Value lists can also be created dynamically, making where in queries simple too.
const users = await sql`
select
*
from users
where age in ${ sql([68, 75, 23]) }
`
or
const [{ a, b, c }] = await sql`
select
*
from (values ${ sql(['a', 'b', 'c']) }) as x(a, b, c)
`
Building queries
Postgres.js features a simple dynamic query builder by conditionally appending/omitting query fragments.
It works by nesting sql`` fragments within other sql`` calls or fragments. This allows you to build dynamic queries safely without risking sql injections through usual string concatenation.
Partial queries
const olderThan = x => sql`and age > ${ x }`
const filterAge = true
await sql`
select
*
from users
where name is not null ${
filterAge
? olderThan(50)
: sql``
}
`
// Which results in:
select * from users where name is not null
// Or
select * from users where name is not null and age > 50
Dynamic filters
await sql`
select
*
from users ${
id
? sql`where user_id = ${ id }`
: sql``
}
`
// Which results in:
select * from users
// Or
select * from users where user_id = $1
SQL functions
Using keywords or calling functions dynamically is also possible by using sql`` fragments.
const date = null
await sql`
update users set updated_at = ${ date || sql`now()` }
`
// Which results in:
update users set updated_at = now()
Table names
Dynamic identifiers like table names and column names are also supported like so:
const table = 'users'
, column = 'id'
await sql`
select ${ sql(column) } from ${ sql(table) }
`
// Which results in:
select "id" from "users"
Quick primer on interpolation
Here’s a quick overview of all the ways to do interpolation in a query template string:
Interpolation syntax | Usage | Example |
---|---|---|
${ sql`` } | for keywords or sql fragments | await sql`SELECT * FROM users ${ sql`order by age desc` }` |
${ sql(string) } | for identifiers | await sql`SELECT * FROM ${ sql('table_name') }` |
${ sql([] or {}, ...) } | for helpers | await sql`INSERT INTO users ${ sql({ name: 'Peter' }) }` |
${ 'somevalue' } | for values | await sql`SELECT * FROM users WHERE age = ${ 42 }` |
Advanced query methods
Cursors
await sql``.cursor([rows = 1], [fn])
Use cursors if you need to throttle the number of rows being returned from a query. You can use a cursor either as an async iterable or with a callback function. When using a callback function, new results won’t be requested until the promise returned by the callback has resolved.
callback function
await sql`
select
*
from generate_series(1,4) as x
`.cursor(async([row]) => {
// row = { x: 1 }
await http.request('https://example.com/wat', { row })
})
for await…of
// for await...of
const cursor = sql`select * from generate_series(1,4) as x`.cursor()
for await (const [row] of cursor) {
// row = { x: 1 }
await http.request('https://example.com/wat', { row })
}
A single row will be returned by default, but you can also request batches by setting the number of rows desired in each batch as the first argument to .cursor:
await sql`
select
*
from generate_series(1,1000) as x
`.cursor(10, async rows => {
// rows = [{ x: 1 }, { x: 2 }, ... ]
await Promise.all(rows.map(row =>
http.request('https://example.com/wat', { row })
))
})
If an error is thrown inside the callback function no more rows will be requested and the outer promise will reject with the thrown error.
You can close the cursor early either by calling break in the for await...of loop, or by returning the token sql.CLOSE from the callback function.
await sql`
select * from generate_series(1,1000) as x
`.cursor(row => {
return Math.random() > 0.9 && sql.CLOSE // or sql.END
})
Instant iteration
await sql``.forEach(fn)
If you want to handle rows returned by a query one by one, you can use .forEach, which returns a promise that resolves once there are no more rows.
await sql`
select created_at, name from events
`.forEach(row => {
// row = { created_at: '2019-11-22T14:22:00Z', name: 'connected' }
})
// No more rows
Query Descriptions
await sql``.describe() -> Result[]
Rather than executing a given query, .describe will return information utilized in the query process. This information can include the query identifier, column types, etc. This is useful for debugging and analyzing your Postgres queries. Furthermore, .describe will give you access to the final generated query string that would be executed.
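As a minimal sketch (the users table is only assumed for illustration), the query is described rather than run:
// .describe() resolves with metadata about the query instead of executing it
const description = await sql`
  select name, age from users where age > ${ 50 }
`.describe()

// Inspect eg. the generated query string and the column/parameter type information
console.log(description)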
Rows as Array of Values
sql``.values()
Using .values will return rows as an array of values for each column, instead of objects. This can be useful to receive identically named columns, or for specific performance/transformation reasons. The column definitions are still included on the result array, plus access to parsers for each column.
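A small sketch, assuming a users table with name and age columns:
const rows = await sql`select name, age from users`.values()
// rows = [['Murray', 68], ['Walter', 80], ...] instead of row objects
// rows.columns still describes each column (name, type, parser)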
Rows as Raw Array of Buffers
sql``.raw()
Using .raw will return rows as an array with Buffer values for each column, instead of objects. This can be useful for specific performance/transformation reasons. The column definitions are still included on the result array, plus access to parsers for each column.
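A corresponding sketch for .raw, with the same assumed users table:
const rows = await sql`select name, age from users`.raw()
// Each row is an array of Buffer (or null) values, one per column
// eg. rows[0][0].toString() gives the raw text received for the first column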
Queries in Files
await sql.file(path, [args], [options]) -> Result[]
Using a file for a query is also supported, with optional parameters to use if the file includes $1, $2, etc.
const result = await sql.file('query.sql', ['Murray', 68])
Multiple statements in one query
await sql``.simple()
The postgres wire protocol supports “simple” and “extended” queries. “Simple” queries support multiple statements, but do not support any dynamic parameters. “Extended” queries support parameters but only one statement. To use “simple” queries you can use sql``.simple(). That will create it as a simple query.
await sql`select 1; select 2;`.simple()
Copy to/from as Streams
Postgres.js supports COPY ... queries, which are exposed as Node.js streams.
await sql`copy ... from stdin`.writable() -> Writable
import { pipeline } from 'node:stream/promises'
// Stream of users with the default tab delimited cells and newline delimited rows
const userStream = Readable.from([
'Murray\t68\n',
'Walter\t80\n'
])
const query = await sql`copy users (name, age) from stdin`.writable()
await pipeline(userStream, query);
await sql`copy ... to stdout`.readable() -> Readable
Using Stream Pipeline
import { pipeline } from 'node:stream/promises'
import { createWriteStream } from 'node:fs'
const readableStream = await sql`copy users (name, age) to stdout`.readable()
await pipeline(readableStream, createWriteStream('output.tsv'))
// output.tsv content: `Murray\t68\nWalter\t80\n`
for await...of
const readableStream = await sql`
copy (
select name, age
from users
where age = 68
) to stdout
`.readable()
for await (const chunk of readableStream) {
// chunk.toString() === `Murray\t68\n`
}
NOTE This is a low-level API which does not provide any type safety. To make this work, you must match your copy query parameters correctly to your Node.js stream read or write code. Ensure Node.js stream backpressure is handled correctly to avoid memory exhaustion.
Canceling Queries in Progress
Postgres.js supports canceling queries in progress. It works by opening a new connection with a protocol level startup message to cancel the current query running on a specific connection. That means there is no guarantee that the query will be canceled, and due to the possible race conditions it might even result in canceling another query. This is fine for long-running queries, but in the case of high load and fast queries it might be better to simply ignore results instead of canceling.
const query = sql`select pg_sleep(100)`.execute()
setTimeout(() => query.cancel(), 100)
const result = await query
Execute
await sql``.execute()
The lazy Promise implementation in Postgres.js is what allows it to distinguish Nested Fragments from the main outer query. This also means that queries are always executed at the earliest in the following tick. If you have a specific need to execute the query in the same tick, you can call .execute().
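A minimal sketch of forcing execution in the same tick:
// .execute() sends the query immediately and returns the same awaitable query object
const query = sql`select 1 as x`.execute()

const [{ x }] = await query
// x = 1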
Unsafe raw string queries
Advanced unsafe use cases
await sql.unsafe(query, [args], [options]) -> Result[]
If you know what you’re doing, you can use unsafe to pass any string you’d like to postgres. Please note that this can lead to SQL injection if you’re not careful.
sql.unsafe('select ' + danger + ' from users where id = ' + dragons)
You can also nest sql.unsafe within a safe sql expression. This is useful if only part of your query has unsafe elements.
const triggerName = 'friend_created'
const triggerFnName = 'on_friend_created'
const eventType = 'insert'
const schema_name = 'app'
const table_name = 'friends'
await sql`
create or replace trigger ${sql(triggerName)}
after ${sql.unsafe(eventType)} on ${sql.unsafe(`${schema_name}.${table_name}`)}
for each row
execute function ${sql(triggerFnName)}()
`
await sql`
create role friend_service with login password ${sql.unsafe(`'${password}'`)}
`
Transactions
await sql.begin([options = ''], fn) -> fn()
BEGIN / COMMIT
Use sql.begin to start a new transaction. Postgres.js will reserve a connection for the transaction and supply a scoped sql instance for all transaction uses in the callback function. sql.begin will resolve with the returned value from the callback function.
BEGIN is automatically sent with the optional options, and if anything fails ROLLBACK will be called so the connection can be released and execution can continue.
const [user, account] = await sql.begin(async sql => {
const [user] = await sql`
insert into users (
name
) values (
'Murray'
)
returning *
`
const [account] = await sql`
insert into accounts (
user_id
) values (
${ user.user_id }
)
returning *
`
return [user, account]
})
Do note that you can often achieve the same result using WITH queries (Common Table Expressions) instead of using transactions.
It’s also possible to pipeline the requests in a transaction if needed by returning an array with queries from the callback function like this:
const result = await sql.begin(sql => [
sql`update ...`,
sql`update ...`,
sql`insert ...`
])
await sql.savepoint([name], fn) -> fn()
SAVEPOINT
sql.begin('read write', async sql => {
const [user] = await sql`
insert into users (
name
) values (
'Murray'
)
`
const [account] = (await sql.savepoint(sql =>
sql`
insert into accounts (
user_id
) values (
${ user.user_id }
)
`
).catch(err => {
// Account could not be created. ROLLBACK SAVEPOINT is called because we caught the rejection.
})) || []
return [user, account]
})
.then(([user, account]) => {
// great success - COMMIT succeeded
})
.catch(() => {
// not so good - ROLLBACK was called
})
await sql.prepare([name]) -> fn()
PREPARE TRANSACTION
Indicates that the transaction should be prepared using the PREPARE TRANSACTION [NAME] statement instead of being committed.
sql.begin('read write', async sql => {
const [user] = await sql`
insert into users (
name
) values (
'Murray'
)
`
await sql.prepare('tx1')
})
Data Transformation
Postgres.js allows for transformation of the data passed to or returned from a query by using the transform option.
Built-in transformation functions are:
- For camelCase - postgres.camel, postgres.toCamel, postgres.fromCamel
- For PascalCase - postgres.pascal, postgres.toPascal, postgres.fromPascal
- For kebab-case - postgres.kebab, postgres.toKebab, postgres.fromKebab
These built-in transformations will only convert to/from snake_case. For example, using { transform: postgres.toCamel } will convert the column names to camelCase only if the column names are in snake_case to begin with. { transform: postgres.fromCamel } will convert camelCase only to snake_case.
By default, using postgres.camel, postgres.pascal and postgres.kebab will perform a two-way transformation - both the data passed to the query and the data returned by the query will be transformed:
// Transform the column names to and from camel case
const sql = postgres({ transform: postgres.camel })
await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER, b_test TEXT)`
await sql`INSERT INTO camel_case ${ sql([{ aTest: 1, bTest: 1 }]) }`
const data = await sql`SELECT ${ sql('aTest', 'bTest') } FROM camel_case`
console.log(data) // [ { aTest: 1, bTest: '1' } ]
To only perform half of the transformation (eg. only the transformation to or from camel case), use the other transformation functions:
// Transform the column names only to camel case
// (for the results that are returned from the query)
const sql = postgres({ transform: postgres.toCamel })
await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER)`
await sql`INSERT INTO camel_case ${ sql([{ a_test: 1 }]) }`
const data = await sql`SELECT a_test FROM camel_case`
console.log(data) // [ { aTest: 1 } ]
// Transform the column names only from camel case
// (for interpolated inserts, updates, and selects)
const sql = postgres({ transform: postgres.fromCamel })
await sql`CREATE TABLE IF NOT EXISTS camel_case (a_test INTEGER)`
await sql`INSERT INTO camel_case ${ sql([{ aTest: 1 }]) }`
const data = await sql`SELECT ${ sql('aTest') } FROM camel_case`
console.log(data) // [ { a_test: 1 } ]
Note that Postgres.js does not rewrite the static parts of the tagged template strings. So to transform column names in your queries, the sql() helper must be used - eg. ${ sql('columnName') } as in the examples above.
Transform undefined Values
By default, Postgres.js will throw the error UNDEFINED_VALUE: Undefined values are not allowed when undefined values are passed.
// Transform undefined values to null
const sql = postgres({
transform: {
undefined: null
}
})
await sql`CREATE TABLE IF NOT EXISTS transform_undefined (a_test INTEGER)`
await sql`INSERT INTO transform_undefined ${ sql([{ a_test: undefined }]) }`
const data = await sql`SELECT a_test FROM transform_undefined`
console.log(data) // [ { a_test: null } ]
To combine with the built-in transform functions, spread the transform in the transform object:
// Transform the column names to and from camel case, and undefined values to null
const sql = postgres({
transform: {
...postgres.camel,
undefined: null
}
})
await sql`CREATE TABLE IF NOT EXISTS transform_undefined (a_test INTEGER)`
await sql`INSERT INTO transform_undefined ${ sql([{ aTest: undefined }]) }`
const data = await sql`SELECT ${ sql('aTest') } FROM transform_undefined`
console.log(data) // [ { aTest: null } ]
Custom Transform Functions
To specify your own transformation functions, you can use the column, value and row options inside of transform, each an object possibly including to and from keys:
- to: The function to transform the outgoing query column name, i.e. SELECT ${ sql('aName') } to SELECT a_name when using postgres.toCamel.
- from: The function to transform the incoming query result column name, see example below.
Both parameters are optional; if not provided, the default transformation function will be used.
// Implement your own functions, look at postgres.toCamel, etc
// as a reference:
// https://github.com/porsager/postgres/blob/4241824ffd7aa94ffb482e54ca9f585d9d0a4eea/src/types.js#L310-L328
function transformColumnToDatabase() { /* ... */ }
function transformColumnFromDatabase() { /* ... */ }
const sql = postgres({
transform: {
column: {
to: transformColumnToDatabase,
from: transformColumnFromDatabase,
},
value: { /* ... */ },
row: { /* ... */ }
}
})
Listen & notify
When you call .listen, a dedicated connection will be created to ensure that you receive notifications instantly. This connection will be used for any further calls to .listen. The connection will automatically reconnect according to a backoff reconnection pattern to not overload the database server.
await sql.listen(channel, onnotify, [onlisten]) -> { state }
Listen
.listen takes the channel name, a function to handle each notify, and an optional function to run every time listen is registered and ready (happens on initial connect and reconnects). It returns a promise which resolves once the LISTEN query to Postgres completes, or if there is already a listener active.
await sql.listen('news', payload => {
const json = JSON.parse(payload)
console.log(json.this) // logs 'is'
})
The optional onlisten method is great to use for a very simple queue mechanism:
await sql.listen(
'jobs',
(x) => run(JSON.parse(x)),
( ) => sql`select unfinished_jobs()`.forEach(run)
)
function run(job) {
// And here you do the work you please
}
await sql.notify(channel, payload) -> Result[]
Notify
Notify can be done as usual in SQL, or by using the sql.notify method.
sql.notify('news', JSON.stringify({ no: 'this', is: 'news' }))
Realtime subscribe
Postgres.js implements the logical replication protocol of PostgreSQL to support subscription to real-time updates of insert, update and delete operations.
NOTE To make this work you must create the proper publications in your database, enable logical replication by setting wal_level = logical in postgresql.conf and connect using either a replication user or a superuser.
Quick start
Create a publication (eg. in migration)
CREATE PUBLICATION alltables FOR ALL TABLES
Subscribe to updates
const sql = postgres({ publications: 'alltables' })
const { unsubscribe } = await sql.subscribe(
'insert:events',
(row, { command, relation, key, old }) => {
// Callback function for each row change
// tell about new event row over eg. websockets or do something else
},
() => {
// Callback on initial connect and potential reconnects
}
)
Subscribe pattern
You can subscribe to specific operations, tables, or even rows with primary keys.
operation:schema.table=primary_key
- operation is one of * | insert | update | delete and defaults to *
- schema defaults to public
- table is a specific table name and defaults to *
- primary_key can be used to only subscribe to specific rows
Examples
sql.subscribe('*', () => /* everything */ )
sql.subscribe('insert', () => /* all inserts */ )
sql.subscribe('*:users', () => /* all operations on the public.users table */ )
sql.subscribe('delete:users', () => /* all deletes on the public.users table */ )
sql.subscribe('update:users=1', () => /* all updates on the users row with a primary key = 1 */ )
Numbers, bigint, numeric
Number in JavaScript is only able to represent 2^53-1 safely, which means that PostgreSQL types like bigint and numeric won’t fit into Number.
Since Node.js v10.4 we can use BigInt to match the PostgreSQL type bigint, which is returned for eg. count(*). Unfortunately, it doesn’t work with JSON.stringify out of the box, so Postgres.js will return it as a string.
If you want to use BigInt you can add this custom type:
const sql = postgres({
types: {
bigint: postgres.BigInt
}
})
There is currently no guaranteed way to handle numeric / decimal types in native JavaScript. These [and similar] types will be returned as a string. The best way in this case is to use custom types.
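For example, a sketch of a custom type that parses numeric into a JavaScript number. The oid 1700 for numeric comes from pg_catalog.pg_type; note that this trades away precision for values outside Number's safe range, so only use it if that is acceptable:
const sql = postgres({
  types: {
    numeric: {
      to : 1700,        // the oid of numeric
      from : [1700],    // parse values with this oid coming from the db
      serialize : x => String(x), // send numerics as text
      parse : x => Number(x)      // beware: loses precision for large or very precise values
    }
  }
})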
Result Array
The Result Array returned from queries is a custom array allowing for easy destructuring or passing on directly to JSON.stringify or general Array usage. It includes the following properties.
.count
The count property is the number of affected rows returned by the database. This is useful for insert, update and delete operations to know the number of rows since .length will be 0 in these cases if not using RETURNING ....
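For instance (the users table is assumed):
const result = await sql`update users set age = age + 1 where age > ${ 65 }`

result.length // 0, since no rows are returned without a returning clause
result.count  // the number of rows the update affected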
.command
The command run by the query - eg. one of SELECT, UPDATE, INSERT, DELETE.
.columns
The columns returned by the query, useful to determine types or to map to the result values when using .values().
{
name : String, // Column name,
type : oid, // PostgreSQL oid column type
parser: Function // The function used by Postgres.js for parsing
}
.statement
The statement contains information about the statement implicitly created by Postgres.js.
{
name : String, // The auto generated statement name
string : String, // The actual query string executed
types : [oid], // An array of oid expected as input parameters
columns : [Column] // Array of columns - same as Result.columns
}
.state
This is the state { pid, secret } of the connection that executed the query.
Connection details
All Postgres options
const sql = postgres('postgres://username:password@host:port/database', {
host : '', // Postgres ip address[es] or domain name[s]
port : 5432, // Postgres server port[s]
path : '', // unix socket path (usually '/tmp')
database : '', // Name of database to connect to
username : '', // Username of database user
password : '', // Password of database user
ssl : false, // true, prefer, require, tls.connect options
max : 10, // Max number of connections
max_lifetime : null, // Max lifetime in seconds (more info below)
idle_timeout : 0, // Idle connection timeout in seconds
connect_timeout : 30, // Connect timeout in seconds
prepare : true, // Automatic creation of prepared statements
types : [], // Array of custom types, see more below
onnotice : fn, // Default console.log, set false to silence NOTICE
onparameter : fn, // (key, value) when server param change
debug : fn, // Is called with (connection, query, params, types)
socket : fn, // fn returning custom socket to use
transform : {
undefined : undefined, // Transforms undefined values (eg. to null)
column : fn, // Transforms incoming column names
value : fn, // Transforms incoming row values
row : fn // Transforms entire rows
},
connection : {
application_name : 'postgres.js', // Default application_name
... // Other connection parameters, see https://www.postgresql.org/docs/current/runtime-config-client.html
},
target_session_attrs : null, // Use 'read-write' with multiple hosts to
// ensure only connecting to primary
fetch_types : true, // Automatically fetches types on connect
// on initial connection.
})
Note that max_lifetime = 60 * (30 + Math.random() * 30) by default. This resolves to an interval between 30 and 60 minutes to optimize for the benefits of prepared statements and working nicely with Linux’s OOM killer.
Dynamic passwords
When clients need to use alternative authentication schemes such as access tokens or connections to databases with rotating passwords, provide either a synchronous or asynchronous function that will resolve the dynamic password value at connection time.
const sql = postgres(url, {
// Other connection config
...
// Password function for the database user
password : async () => await signer.getAuthToken(),
})
SSL
Although vulnerable to MITM attacks, a common configuration for the ssl option for some cloud providers is to set rejectUnauthorized to false (if NODE_ENV is production):
const sql =
process.env.NODE_ENV === 'production'
? // "Unless you're using a Private or Shield Heroku Postgres database, Heroku Postgres does not currently support verifiable certificates"
// https://help.heroku.com/3DELT3RK/why-can-t-my-third-party-utility-connect-to-heroku-postgres-with-ssl
postgres({ ssl: { rejectUnauthorized: false } })
: postgres()
For more information regarding ssl with postgres, check out the Node.js documentation for tls.
Multi-host connections - High Availability (HA)
Multiple connection strings can be passed to postgres() in the form of postgres('postgres://localhost:5432,localhost:5433', ...). This works the same as the native psql command. Read more at multiple host URIs.
Connections will be attempted in order of the specified hosts/ports. On a successful connection, all retries will be reset. This ensures that hosts can come up and down seamlessly.
If you specify target_session_attrs: 'primary' or PGTARGETSESSIONATTRS=primary, Postgres.js will only connect to the primary host, allowing for zero downtime failovers.
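A minimal sketch (hosts and database name are placeholders):
const sql = postgres('postgres://host1:5432,host2:5432/database', {
  target_session_attrs: 'primary' // only accept connections to the writable primary
})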
The Connection Pool
Connections are created lazily once a query is created. This means that simply doing const sql = postgres(...) won’t have any effect other than instantiating a new sql instance.
No connection will be made until a query is made.
For example:
const sql = postgres() // no connections are opened
await sql`...` // one connection is now opened
await sql`...` // previous opened connection is reused
// two connections are opened now
await Promise.all([
sql`...`,
sql`...`
])
When there is a high number of concurrent queries, postgres will open as many connections as needed, up until the max number of connections is reached. By default max is 10. This can be changed by setting max in the postgres() call. Example - postgres('connectionURL', { max: 20 }).
This means that we get a much simpler story for error handling and reconnections. Queries will be sent over the wire immediately on the next available connection in the pool. Connections are automatically taken out of the pool if you start a transaction using sql.begin(), and automatically returned to the pool once your transaction is done.
Any query which was already sent over the wire will be rejected if the connection is lost. It’ll automatically defer to the error handling you have for that query, and since connections are lazy it’ll automatically try to reconnect the next time a query is made. The benefit of this is no weird generic “onerror” handler that tries to get things back to normal, and also simpler application code since you don’t have to handle errors out of context.
There are no guarantees about queries executing in order unless using a transaction with sql.begin() or setting max: 1. Of course doing a series of queries, one awaiting the other, will work as expected, but that’s just due to the nature of js async/promise handling, so it’s not necessary for this library to be concerned with ordering.
Since this library automatically creates prepared statements, it also has a default max lifetime for connections to prevent memory bloat on the database itself. This is a random interval for each connection between 45 and 90 minutes. This allows multiple connections to independently come up and down without affecting the service.
Connection timeout
By default, connections will not close until .end() is called. However, it may be useful to have them close automatically when:
- re-instantiating multiple sql`` instances
- using Postgres.js in a Serverless environment (Lambda, etc.)
- using Postgres.js with a database service that automatically closes connections after some time (see ECONNRESET issue)
This can be done using the idle_timeout or max_lifetime options. These configuration options specify the number of seconds to wait before automatically closing an idle connection and the maximum time a connection can exist, respectively.
For example, to close a connection that has either been idle for 20 seconds or existed for more than 30 minutes:
const sql = postgres({
idle_timeout: 20,
max_lifetime: 60 * 30
})
Cloudflare Workers support
Postgres.js has built-in support for the TCP socket API in Cloudflare Workers, which is on-track to be standardized and adopted in Node.js and other JavaScript runtimes, such as Deno.
You can use Postgres.js directly in a Worker, or, to benefit from connection pooling and query caching, via the Hyperdrive service available to Workers, by passing the Hyperdrive connectionString when creating a new postgres client as follows:
// Requires Postgres.js 3.4.0 or later
import postgres from 'postgres'
interface Env {
HYPERDRIVE: Hyperdrive;
}
export default {
  async fetch(req: Request, env: Env, ctx: ExecutionContext) {
    // The Postgres.js library accepts a connection string directly
    const sql = postgres(env.HYPERDRIVE.connectionString)
    const results = await sql`SELECT * FROM users LIMIT 10`
    return Response.json(results)
  }
}
In wrangler.toml you will need to enable the nodejs_compat compatibility flag to allow Postgres.js to operate in the Workers environment:
compatibility_flags = ["nodejs_compat"]
Auto fetching of array types
Postgres.js will automatically fetch table/array-type information when it first connects to a database. If you have revoked access to pg_catalog this feature will no longer work and will need to be disabled. You can disable this feature by setting fetch_types to false.
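A minimal sketch of turning it off:
const sql = postgres({
  fetch_types: false // don't query type information on connect, eg. if access to pg_catalog is revoked
})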
Environment variables
It is also possible to connect to the database without a connection string or any options. Postgres.js will fall back to the common environment variables used by psql as in the table below:
const sql = postgres()
Option | Environment Variables |
---|---|
host | PGHOST |
port | PGPORT |
database | PGDATABASE |
username | PGUSERNAME or PGUSER |
password | PGPASSWORD |
idle_timeout | PGIDLE_TIMEOUT |
connect_timeout | PGCONNECT_TIMEOUT |
Prepared statements
Prepared statements will automatically be created for any queries where it can be inferred that the query is static. This can be disabled by using the prepare: false option. For instance, this is useful when using PGBouncer in transaction mode.
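A minimal sketch of turning prepared statements off:
const sql = postgres({
  prepare: false // skip prepared statements, eg. when connecting through PGBouncer in transaction mode
})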
Custom Types
You can add ergonomic support for custom types, or simply use sql.typed(value, type) inline, where type is the PostgreSQL oid for the type and the correctly serialized string. (oid values for types can be found in the pg_catalog.pg_type table.)
Adding Query helpers is the cleanest approach which can be done like this:
const sql = postgres({
types: {
rect: {
// The pg_types oid to pass to the db along with the serialized value.
to : 1337,
// An array of pg_types oids to handle when parsing values coming from the db.
from : [1337],
// Function that transforms values before sending them to the db.
serialize : ({ x, y, width, height }) => [x, y, width, height],
// Function that transforms values coming from the db.
parse : ([x, y, width, height]) => ({ x, y, width, height })
}
}
})
// Now you can use sql.typed.rect() as specified above
const [custom] = await sql`
insert into rectangles (
name,
rect
) values (
'wat',
${ sql.typed.rect({ x: 13, y: 37, width: 42, height: 80 }) }
)
returning *
`
// custom = { name: 'wat', rect: { x: 13, y: 37, width: 42, height: 80 } }
Custom socket
Easily do in-process ssh tunneling to your database by providing a custom socket for Postgres.js to use. The function (optionally async) must return a socket-like duplex stream.
Here’s a sample using ssh2:
import ssh2 from 'ssh2'
const sql = postgres({
...options,
socket: ({ host: [host], port: [port] }) => new Promise((resolve, reject) => {
const ssh = new ssh2.Client()
ssh
.on('error', reject)
.on('ready', () =>
ssh.forwardOut('127.0.0.1', 12345, host, port,
(err, socket) => err ? reject(err) : resolve(socket)
)
)
.connect(sshOptions)
})
})
Teardown / Cleanup
To ensure proper teardown and cleanup on server restarts use await sql.end() before process.exit().
Calling sql.end() will reject new queries and return a Promise which resolves when all queries are finished and the underlying connections are closed. If a { timeout } option is provided any pending queries will be rejected once the timeout (in seconds) is reached and the connections will be destroyed.
Prexit
Sample shutdown using Prexit:
import prexit from 'prexit'
prexit(async () => {
await sql.end({ timeout: 5 })
await new Promise(r => server.close(r))
})
Reserving connections
await sql.reserve()
The reserve method pulls out a connection from the pool, and returns a client that wraps the single connection. This can be used for running queries on an isolated connection.
const reserved = await sql.reserve()
await reserved`select * from users`
await reserved.release()
reserved.release()
Once you have finished with the reserved connection, call release to add it back to the pool.
Error handling
Errors are all thrown to related queries and never globally. Errors coming from the database itself are always in the native Postgres format, and the same goes for any Node.js errors, eg. those coming from the underlying connection.
Query errors will contain a stored error with the origin of the query to aid in tracing errors.
Query errors will also contain the query string and the parameters. These are not enumerable to avoid accidentally leaking confidential information in logs. To log these it is required to specifically access error.query and error.parameters, or set debug: true in options.
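A small sketch of logging these on failure (the failing query is just an example):
try {
  await sql`select wat from users`
} catch (error) {
  // query and parameters are non-enumerable, so access them explicitly
  console.error(error.message)
  console.error(error.query)
  console.error(error.parameters)
}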
There are also the following errors specifically for this library.
UNSAFE_TRANSACTION
Only use sql.begin or max: 1
To ensure statements in a transaction run on the same connection (which is required for them to run inside the transaction), you must use sql.begin(...) or only allow a single connection in options (max: 1).
UNDEFINED_VALUE
Undefined values are not allowed
Postgres.js won’t accept undefined as values in tagged template queries since it becomes ambiguous what to do with the value. If you want to set something to null, use null explicitly.
MESSAGE_NOT_SUPPORTED
X (X) is not supported
This error is thrown whenever a message is received from Postgres which is not supported by this library. Feel free to file an issue if you think something is missing.
MAX_PARAMETERS_EXCEEDED
Max number of parameters (65534) exceeded
The postgres protocol doesn’t allow more than 65534 (16-bit) parameters. If you run into this issue there are various workarounds, such as using sql([...]) to escape values instead of passing them as parameters.
SASL_SIGNATURE_MISMATCH
Message type X not supported
When using SASL authentication the server responds with a signature at the end of the authentication flow which needs to match the one on the client. This is to avoid man-in-the-middle attacks. If you receive this error the connection was canceled because the server did not reply with the expected signature.
NOT_TAGGED_CALL
Query not called as a tagged template literal
Making queries has to be done using the sql function as a tagged template. This is to ensure parameters are serialized and passed to Postgres as query parameters with correct types and to avoid SQL injection.
AUTH_TYPE_NOT_IMPLEMENTED
Auth type X not implemented
Postgres supports many different authentication types. This one is not supported.
CONNECTION_CLOSED
write CONNECTION_CLOSED host:port
This error is thrown if the connection was closed without an error. This should not happen during normal operations, so please create an issue if this was unexpected.
CONNECTION_ENDED
write CONNECTION_ENDED host:port
This error is thrown if the user has called sql.end() and performed a query afterward.
CONNECTION_DESTROYED
write CONNECTION_DESTROYED host:port
This error is thrown for any queries that were pending when the timeout to sql.end({ timeout: X }) was reached.
CONNECTION_CONNECT_TIMEOUT
write CONNECTION_CONNECT_TIMEOUT host:port
This error is thrown if the startup phase of the connection (tcp, protocol negotiation, and auth) took more than the default 30 seconds or what was specified using connect_timeout or PGCONNECT_TIMEOUT.
TypeScript support
postgres has TypeScript support. You can pass a row list type for your queries in this way:
interface User {
id: number
name: string
}
const users = await sql<User[]>`SELECT * FROM users`
users[0].id // ok => number
users[1].name // ok => string
users[0].invalid // fails: `invalid` does not exist on `User`
However, be sure to check the array length to avoid accessing properties of undefined rows:
const users = await sql<User[]>`SELECT * FROM users WHERE id = ${id}`
if (!users.length)
throw new Error('Not found')
return users[0]
You can also prefer destructuring when you only care about a fixed number of rows. In this case, we recommend using tuples to handle undefined properly:
const [user]: [User?] = await sql`SELECT * FROM users WHERE id = ${id}`
if (!user) // => User | undefined
throw new Error('Not found')
return user // => User
// NOTE:
const [first, second]: [User?] = await sql`SELECT * FROM users WHERE id = ${id}` // fails: `second` does not exist on `[User?]`
const [first, second] = await sql<[User?]>`SELECT * FROM users WHERE id = ${id}` // doesn't fail: `second: User | undefined`
We do our best to type all the public API; however, types are not always updated when features are added or changed. Feel free to open an issue if you have trouble with types.
Migration tools
Postgres.js doesn’t come with any migration solution since it’s way out of scope, but here are some modules that support Postgres.js for migrations:
- https://github.com/porsager/postgres-shift
- https://github.com/lukeed/ley
- https://github.com/JAForbes/pgmg
Thank you
A really big thank you to @JAForbes who introduced me to Postgres and still holds my hand navigating all the great opportunities we have.
Thanks to @ACXgit for initial tests and dogfooding.
Also thanks to Ryan Dahl for letting me have the postgres
npm package name.