PostgreSQL Sink Options
RisingWave provides two ways to sink data to PostgreSQL:
- JDBC Connector (`connector = 'jdbc'`) - Uses JDBC for database connectivity
- Native PostgreSQL Connector (`connector = 'postgres'`) - Uses the native PostgreSQL protocol
Prerequisites
You can test out this process on your own device by using the postgres-sink demo in the integration_test directory of the RisingWave repository.
Set up a PostgreSQL database
- AWS RDS
- Self-hosted
Set up a PostgreSQL RDS instance on AWS
Here we will use a standard class instance without Multi-AZ deployment as an example.
- Log in to the AWS console. Search “RDS” in services and select the RDS panel.
- Create a database with PostgreSQL as the Engine type. We recommend setting up a username and password or using other security options.
- When the new instance becomes available, click on its panel.
- From the Connectivity panel, we can find the endpoint and connection port information.

Connect to the RDS instance from Postgres
Now we can connect to the RDS instance. Make sure you have installed psql on your local machine, and start a psql prompt. Fill in the endpoint, the port, and the login credentials in the connection parameters.
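For example, a connection command might look like the following, with the endpoint, port, user, and database name taken from your own RDS instance:

```sh
psql --host=<rds-endpoint> --port=5432 --username=<rds-username> --dbname=<database-name> --password
```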
Create a table in PostgreSQL
Use the following query to set up a table in PostgreSQL. We will sink to this table from RisingWave.
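A minimal sketch of such a table, using the target_id and target_count column names referenced later in this guide; adjust the column types to match your data:

```sql
-- Run in PostgreSQL: the table RisingWave will sink into.
CREATE TABLE target_count (
    target_id VARCHAR(128) PRIMARY KEY,
    target_count BIGINT
);
```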
Set up RisingWave
Install and launch RisingWave
To install and start RisingWave locally, see the Get started guide. We recommend running RisingWave locally for testing purposes.
Notes about running RisingWave from binaries
If you are running RisingWave locally from binaries and intend to use the native CDC source connectors or the JDBC sink connector, make sure you have JDK 11 or a later version installed in your environment.
Create a sink
Syntax
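The statement follows the general CREATE SINK shape sketched below; see CREATE SINK for the complete grammar:

```sql
CREATE SINK [ IF NOT EXISTS ] sink_name
[ FROM sink_from | AS select_query ]
WITH (
    connector = 'jdbc',        -- or 'postgres' for the native connector
    field_name = 'value', ...  -- connector-specific parameters listed below
);
```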
Parameters (JDBC)
Parameter or clause | Description |
---|---|
sink_name | Name of the sink to be created. |
sink_from | A clause that specifies the direct source from which data will be output. sink_from can be a materialized view or a table. Either this clause or a SELECT query must be specified. |
AS select_query | A SELECT query that specifies the data to be output to the sink. Either this query or a FROM clause must be specified. See SELECT for the syntax and examples of the SELECT command. |
connector | Sink connector should be jdbc. To switch from jdbc to postgres, set stream_switch_jdbc_pg_to_native = true under [streaming.developer]. |
jdbc.url | Required. The JDBC URL of the destination database necessary for the driver to recognize and connect to the database. |
user | The user name for the database connection. |
password | The password for the database connection. |
jdbc.query.timeout | Specifies the timeout for operations against the downstream database. If not set, the default is 60s. |
jdbc.auto.commit | Controls whether to automatically commit transactions for JDBC sink. If not set, the default is false. |
table.name | Required. The table in the destination database you want to sink to. |
schema.name | The schema in the destination database you want to sink to. The default value is public. |
type | Sink data type. Supported types: append-only and upsert. |
primary_key | Required if type is upsert. The primary key of the sink, which should match the primary key of the downstream table. |
Parameters (Postgres Native)
RisingWave introduced the native Postgres sink connector in version 2.2, and the JDBC sink connector for Postgres will be deprecated in a future release. You can try it in place for your existing JDBC sinks by setting stream_switch_jdbc_pg_to_native = true under [streaming.developer].
Parameter or clause | Description |
---|---|
sink_name | Name of the sink to be created. |
sink_from | A clause that specifies the direct source from which data will be output. sink_from can be a materialized view or a table. Either this clause or a SELECT query must be specified. |
AS select_query | A SELECT query that specifies the data to be output to the sink. Either this query or a FROM clause must be specified. See SELECT for the syntax and examples of the SELECT command. |
connector | Sink connector must be postgres. |
user | The user name for the database connection. |
password | The password for the database connection. |
database | Required. The name of the destination database you want to sink to. |
table | Required. The table in the destination database you want to sink to. |
schema | The schema in the destination database you want to sink to. The default value is public. |
type | Sink data type. Supported types: append-only and upsert. |
primary_key | Required if type is upsert. The primary key of the sink, which should match the primary key of the downstream table. |
ssl_mode | Determines the level of SSL/TLS encryption for secure communication with Postgres. Accepted values are disabled, preferred, required, verify-ca, and verify-full. The default value is disabled. |
ssl_root_cert | Specifies the root certificate secret. You must create the secret first and then use it here. |
Sink data from RisingWave to PostgreSQL
Create source and materialized view
You can sink data from a table or a materialized view in RisingWave to PostgreSQL. For demonstration purposes, we’ll create a source and a materialized view, and then sink data from the materialized view. If you already have a table or materialized view to sink data from, you don’t need to perform this step. Run the following query to create a source to read data from a Kafka broker.
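A sketch of such a source, assuming an illustrative user_behaviors topic, column list, and broker address; adjust these to your own Kafka setup:

```sql
-- Read JSON events from a Kafka topic into a RisingWave source.
CREATE SOURCE user_behaviors (
    user_id VARCHAR,
    target_id VARCHAR,
    event_timestamp TIMESTAMPTZ
) WITH (
    connector = 'kafka',
    topic = 'user_behaviors',
    properties.bootstrap.server = 'message_queue:29092',
    scan.startup.mode = 'earliest'
) FORMAT PLAIN ENCODE JSON;
```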
Next, run the following query to create a materialized view that counts events per target_id. Note that the materialized view and the target table share the same schema.
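A sketch of the materialized view, aggregating the source created above so that its columns match the target_count table created in PostgreSQL:

```sql
-- Count events per target_id; the output schema matches the PostgreSQL table.
CREATE MATERIALIZED VIEW target_count AS
SELECT
    target_id,
    COUNT(*) AS target_count
FROM user_behaviors
GROUP BY target_id;
```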
Sink from RisingWave
You can use either the JDBC connector or the native PostgreSQL connector to sink data to PostgreSQL.
Option 1: Native PostgreSQL Connector (Recommended for PostgreSQL-specific features)
Use the native PostgreSQL connector for better performance and access to PostgreSQL-specific features:
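A sketch of a native sink from the target_count materialized view; the host, credentials, and database below are placeholders for your own connection details:

```sql
CREATE SINK target_count_postgres_sink FROM target_count WITH (
    connector = 'postgres',
    host = '<postgres-host>',
    port = '5432',
    user = '<username>',
    password = '<password>',
    database = '<database>',
    table = 'target_count',
    type = 'upsert',
    primary_key = 'target_id'   -- required for upsert sinks
);
```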
Option 2: JDBC Connector (Recommended for compatibility)
Use the JDBC connector for broader compatibility with PostgreSQL deployments. Use the following query to sink data from the materialized view to the target table in PostgreSQL. Ensure that the jdbc.url is accurate and reflects the PostgreSQL database that you are connecting to. See CREATE SINK for more details.
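A sketch of the equivalent JDBC sink; the host, credentials, and database in the URL are placeholders:

```sql
CREATE SINK target_count_postgres_sink FROM target_count WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://<postgres-host>:5432/<database>',
    user = '<username>',
    password = '<password>',
    table.name = 'target_count',
    type = 'upsert',
    primary_key = 'target_id'   -- required for upsert sinks
);
```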
Verify update
To ensure that the target table has been updated, query from target_count in PostgreSQL.
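For example, run a query like the following in your PostgreSQL session:

```sql
-- Run in PostgreSQL to verify that sinked rows are arriving.
SELECT * FROM target_count
ORDER BY target_count DESC
LIMIT 10;
```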
Advanced configurations
Native PostgreSQL Connector Parameters
Parameter | Description | Required | Default |
---|---|---|---|
host | PostgreSQL server hostname | Yes | |
port | PostgreSQL port number | No | 5432 |
user | PostgreSQL username | Yes | |
password | PostgreSQL password | Yes | |
database | Target database name | Yes | |
table | Target table name | Yes | |
schema | Target schema name | No | public |
type | Sink type: append-only or upsert | Yes | |
ssl_mode | SSL mode: disabled, preferred, required, verify-ca, or verify-full | No | disabled |
ssl_root_cert | SSL root certificate secret | No | |
max_batch_rows | Maximum rows per batch | No | 1024 |
JDBC Connector Parameters
Parameter | Description | Required |
---|---|---|
jdbc.url | JDBC connection URL | Yes |
user | Database username | Yes |
password | Database password | Yes |
table.name | Target table name | Yes |
type | Sink type: append-only or upsert | Yes |
primary_key | Primary key columns | Required if type is upsert |
Configuration examples
Native PostgreSQL with SSL
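A sketch of a native sink with SSL enabled, using only the parameters documented above; host, credentials, and database are placeholders:

```sql
CREATE SINK secure_pg_sink FROM target_count WITH (
    connector = 'postgres',
    host = '<postgres-host>',
    port = '5432',
    user = '<username>',
    password = '<password>',
    database = '<database>',
    table = 'target_count',
    type = 'upsert',
    primary_key = 'target_id',
    ssl_mode = 'required'       -- encrypt the connection without certificate verification
);
```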
JDBC with connection pooling
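This guide does not document dedicated connection-pool parameters for the JDBC sink, so the sketch below only tunes the documented connection-behavior options jdbc.query.timeout and jdbc.auto.commit; all connection details are placeholders:

```sql
CREATE SINK jdbc_pg_sink FROM target_count WITH (
    connector = 'jdbc',
    jdbc.url = 'jdbc:postgresql://<postgres-host>:5432/<database>',
    user = '<username>',
    password = '<password>',
    table.name = 'target_count',
    type = 'upsert',
    primary_key = 'target_id',
    jdbc.query.timeout = '120s',  -- allow slower downstream operations
    jdbc.auto.commit = 'false'
);
```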
Batch optimized configuration
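A sketch that raises max_batch_rows for write-heavy workloads on the native connector; the value 4096 is illustrative, not a recommendation:

```sql
CREATE SINK batched_pg_sink FROM target_count WITH (
    connector = 'postgres',
    host = '<postgres-host>',
    port = '5432',
    user = '<username>',
    password = '<password>',
    database = '<database>',
    table = 'target_count',
    type = 'upsert',
    primary_key = 'target_id',
    max_batch_rows = '4096'     -- larger batches reduce round trips
);
```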
Cross-schema sink
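A sketch that sinks into a non-default schema via the schema parameter; the analytics schema name is a placeholder:

```sql
CREATE SINK analytics_pg_sink FROM target_count WITH (
    connector = 'postgres',
    host = '<postgres-host>',
    port = '5432',
    user = '<username>',
    password = '<password>',
    database = '<database>',
    schema = 'analytics',       -- target schema other than public
    table = 'target_count',
    type = 'upsert',
    primary_key = 'target_id'
);
```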
Common issues and solutions
- Connection timeouts: Increase timeout values or check network connectivity
- Authentication failures: Verify credentials and pg_hba.conf settings
- Table lock conflicts: Monitor for long-running transactions
- Disk space issues: Monitor PostgreSQL data directory space
- Memory issues: Check shared_buffers and work_mem settings
- SSL certificate errors: Verify certificate configuration for secure connections
Performance optimization
- Batch size: Use optimal batch size (100-1000 recommended)
- Indexes: Ensure proper indexes on target tables
- Connection pooling: Configure appropriate connection pool settings
- VACUUM and ANALYZE: Regular maintenance for optimal performance
- WAL configuration: Optimize write-ahead logging for write-heavy workloads
Security best practices
Authentication and authorization
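As an illustration, you can create a dedicated PostgreSQL role for the sink and grant it only the privileges it needs on the target table; the role, schema, and table names below are placeholders:

```sql
-- Run in PostgreSQL: least-privilege role for the RisingWave sink.
CREATE ROLE rw_sink_user LOGIN PASSWORD '<password>';
GRANT USAGE ON SCHEMA public TO rw_sink_user;
GRANT SELECT, INSERT, UPDATE, DELETE ON public.target_count TO rw_sink_user;
```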
SSL/TLS encryption
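As an illustration, you can store the CA certificate in a RisingWave secret and reference it from a native sink with ssl_mode set to verify-full; the secret name, certificate value, and connection details are placeholders, and the secret reference follows RisingWave's CREATE SECRET syntax:

```sql
-- Store the CA certificate as a secret, then reference it from the sink.
CREATE SECRET pg_root_ca WITH (backend = 'meta') AS '<PEM-encoded CA certificate>';

CREATE SINK verified_pg_sink FROM target_count WITH (
    connector = 'postgres',
    host = '<postgres-host>',
    port = '5432',
    user = '<username>',
    password = '<password>',
    database = '<database>',
    table = 'target_count',
    type = 'upsert',
    primary_key = 'target_id',
    ssl_mode = 'verify-full',           -- verify certificate and hostname
    ssl_root_cert = secret pg_root_ca
);
```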
Limitations
- Transaction size: Large transactions may impact performance
- Connection limits: Subject to PostgreSQL max_connections limit
- Lock contention: Concurrent writes may cause lock conflicts
- Data type constraints: Some RisingWave types may require conversion
- Network latency: Performance depends on network connectivity
- Schema changes: Limited support for automatic schema migration
PostgreSQL has specific limits and performance characteristics. Monitor your database performance and configure appropriate resources for your workload.
Use appropriate indexes, batch sizes, and connection pooling for optimal performance. Regular maintenance (VACUUM, ANALYZE) is important for long-running systems.
Ensure proper authentication and network security. Use SSL/TLS for production deployments and implement proper access controls and monitoring.
- A varchar column in RisingWave can be sunk to a uuid column in Postgres.
- Only one-dimensional arrays in RisingWave can be sunk to PostgreSQL.
- For the array type, we only support the smallint, integer, bigint, real, double precision, and varchar types now.