v2.1.0 · Java library · Apache 2.0

Replace your migration glue code
with YAML.

One Maven dependency. Your BatchConfiguration.java becomes three YAML files. Oracle → PostgreSQL in production.

By Pulsaride Solutions · Apache 2.0 · Spring Boot 3 · Java 21

application.yml
# application.yml
spring:
  datasource:
    url: ${POSTGRES_URL}
    username: ${POSTGRES_USER}
    password: ${POSTGRES_PASSWORD}

pulsaride:
  migration:
    source-table: PRODUCTS
    target-table: products
    transform: migration/products.yaml
    mode: SIMPLE

The problem with custom BatchConfiguration.java

Every Java team has one. It started at 50 lines. It's 300 now. It's not the migration — it's everything around it.

01

Happy-path code only

Your BatchConfig.java covers the normal case. The FK violation at row 847, the network timeout at 2 am, the re-run that doubled counts — those were handled in Slack, not in code.

02

No clean restart story

Ask a teammate: if the process dies at row 50,000, does it resume from the checkpoint or start over? If they have to read the code to answer, you don't have a restart story.

03

FK ordering is your responsibility

You hand-wrote the migration step order based on your schema knowledge. When the schema changes, that order needs to change too — and there is nothing to enforce it.
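The ordering problem is a topological sort over the FK graph, which a tool can derive from declared dependencies instead of tribal knowledge. A minimal sketch of that derivation (the table names and dependency map here are illustrative, not taken from the library):

```java
import java.util.*;

public class FkOrder {
    // depends.get(t) = tables that must be migrated before t (its FK targets).
    public static List<String> order(Map<String, List<String>> depends) {
        List<String> result = new ArrayList<>();
        Set<String> done = new HashSet<>();
        Set<String> visiting = new HashSet<>();
        for (String t : depends.keySet()) visit(t, depends, done, visiting, result);
        return result;
    }

    private static void visit(String t, Map<String, List<String>> depends,
                              Set<String> done, Set<String> visiting, List<String> out) {
        if (done.contains(t)) return;
        if (!visiting.add(t)) throw new IllegalStateException("FK cycle at " + t);
        for (String dep : depends.getOrDefault(t, List.of()))
            visit(dep, depends, done, visiting, out);
        visiting.remove(t);
        done.add(t);
        out.add(t); // every prerequisite was emitted before this point
    }

    public static void main(String[] args) {
        Map<String, List<String>> fk = Map.of(
            "order_items", List.of("orders", "products"),
            "orders", List.of("customers"),
            "products", List.of("categories"),
            "customers", List.of(),
            "categories", List.of());
        System.out.println(order(fk)); // parents always precede children
    }
}
```

When the schema changes, the map changes and the order follows; there is nothing to hand-maintain.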

Auto-configuration replaces the boilerplate

The 3 Java files that remain: App.java, DataSourceConfig.java, MigrationRunner.java — 35 lines total.

BEFORE — BatchConfiguration.java

@Configuration
public class BatchConfiguration {

  @Bean
  public Step migrateStep(JobRepository repo,
      PlatformTransactionManager txm,
      OracleItemReader reader,
      PulsarideItemProcessor processor,
      PostgresItemWriter writer,
      MigrationSkipListener skipListener,
      RetryPolicy retryPolicy) {
    return new StepBuilder("migrateStep", repo)
      .<Row, Row>chunk(500, txm)
      .reader(reader)
      .processor(processor)
      .writer(writer)
      .faultTolerant()
      .retry(TransientDataAccessException.class)
      .retryLimit(3)
      .skip(DataIntegrityViolationException.class)
      .skipLimit(100)
      .listener(skipListener)
      .build();
  }

  // + migrationJob, deferredStep, dlqStep
  // + OracleItemReader, PostgresItemWriter
  // + MigrationSkipListener, FK order logic
  // ≈ 280 more lines
}
AFTER — application.yml

# application.yml — the only config you write
spring:
  datasource:
    url: ${POSTGRES_URL}
    username: ${POSTGRES_USER}
    password: ${POSTGRES_PASSWORD}

pulsaride:
  migration:
    source-table: PRODUCTS
    target-table: products
    transform: migration/products.yaml
    mode: SIMPLE
AFTER — migration/products.yaml

# migration/products.yaml — transform rules
name: products-migration
target_table: products
reject_policy: FAIL_ROW

depends_on:
  - table: categories
    key: category_id
    via: p.CATEGORY_ID
    on_missing: DEFER

sources:
  - name: p
    table: PRODUCTS

fields:
  - name: id
    source: p.PRODUCT_ID
  - name: name
    expression: "trim(?1)"
    source: p.NAME
    expect: "NOT NULL"
  - name: price
    source: p.PRICE
    target_type: decimal(10,2)
    expect: "> 0"

filter: "p.ACTIVE = 1"

Everything the glue code was supposed to do

The capabilities your hand-rolled BatchConfig.java never quite had.

Full Load & CDC

SIMPLE for small tables, partitioned SCALE for large ones. AUTO selects based on row count. CDC mode uses Debezium with per-table circuit breakers and auto-replay.
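The per-table circuit breaker idea can be sketched generically: after N consecutive failures a table's stream is paused so it cannot poison the rest of the run, and it replays after a reset. The class, method names, and threshold below are my illustration, not the library's API:

```java
import java.util.*;

public class TableBreaker {
    private final int threshold;
    private final Map<String, Integer> failures = new HashMap<>();
    private final Set<String> open = new HashSet<>();

    public TableBreaker(int threshold) { this.threshold = threshold; }

    // A table with an open breaker is paused; others keep flowing.
    public boolean allows(String table) { return !open.contains(table); }

    public void onSuccess(String table) { failures.remove(table); }

    public void onFailure(String table) {
        int n = failures.merge(table, 1, Integer::sum);
        if (n >= threshold) open.add(table); // trip: pause this table only
    }

    // Manual or automatic replay after the underlying fault is fixed.
    public void reset(String table) { open.remove(table); failures.remove(table); }

    public static void main(String[] args) {
        TableBreaker b = new TableBreaker(3);
        for (int i = 0; i < 3; i++) b.onFailure("orders");
        System.out.println("orders allowed: " + b.allows("orders"));     // false
        System.out.println("products allowed: " + b.allows("products")); // true
    }
}
```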

FK-Aware Execution

depends_on defers rows when their FK target hasn't arrived yet. Deferred rows replay automatically on dependency completion. No step ordering, no silent drops.
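The deferral mechanic can be illustrated with a toy loader: a row whose FK target has not arrived yet is parked rather than dropped, then replayed when the dependency completes. The row shape and method names are invented for the sketch:

```java
import java.util.*;

public class DeferringLoader {
    private final Set<Integer> loadedCategories = new HashSet<>();
    private final List<int[]> deferred = new ArrayList<>(); // {productId, categoryId}
    private final List<Integer> written = new ArrayList<>();

    public void loadProduct(int productId, int categoryId) {
        if (loadedCategories.contains(categoryId)) written.add(productId);
        else deferred.add(new int[]{productId, categoryId}); // park, don't drop
    }

    public void categoryArrived(int categoryId) {
        loadedCategories.add(categoryId);
        // Replay every row that was waiting on this dependency.
        Iterator<int[]> it = deferred.iterator();
        while (it.hasNext()) {
            int[] row = it.next();
            if (row[1] == categoryId) { written.add(row[0]); it.remove(); }
        }
    }

    public List<Integer> written() { return written; }
    public int deferredCount() { return deferred.size(); }

    public static void main(String[] args) {
        DeferringLoader l = new DeferringLoader();
        l.loadProduct(101, 7);           // category 7 not loaded yet → deferred
        l.categoryArrived(7);            // dependency completes → replay
        System.out.println(l.written()); // [101]
    }
}
```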

Spring Auto-Configuration

BatchAutoConfiguration is excluded automatically. MigrationEngine activates from application.yml. Zero @Bean definitions in your project.

Live Monitoring

WebSocket dashboard at :8080/pulsaride. Run log, DLQ depth, per-table progress, alert thresholds. Spring Actuator health at /actuator/health/pulsaride.

Data Quality

expect: assertions on any field. Failed rows land in pulsaride_dlq with the source payload and rejection reason. Row-level diff report on every run.
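The expect: rules used throughout this page ("NOT NULL", "> 0", ">= 0") are predicates over a single field value. A toy evaluator makes the semantics concrete; the parsing below is my illustration of the idea, not the library's actual grammar:

```java
public class Expect {
    // Evaluates a tiny subset of expect: rules against one numeric value.
    public static boolean check(String rule, Double value) {
        String r = rule.trim();
        if (r.equals("NOT NULL") || r.equals("IS NOT NULL")) return value != null;
        if (value == null) return false; // comparisons fail on missing data
        if (r.startsWith(">=")) return value >= Double.parseDouble(r.substring(2).trim());
        if (r.startsWith(">"))  return value >  Double.parseDouble(r.substring(1).trim());
        throw new IllegalArgumentException("unsupported rule: " + rule);
    }

    public static void main(String[] args) {
        System.out.println(check("> 0", 19.99));     // true
        System.out.println(check(">= 0", -1.0));     // false
        System.out.println(check("NOT NULL", null)); // false → row goes to the DLQ
    }
}
```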

Restartable by Default

Every run checkpoints at the batch level. A crash at row 50,000 resumes from the last committed batch — not from row zero. Config drift detection blocks unsafe re-runs.
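Batch-level checkpointing boils down to persisting the last committed offset and starting the next run from there. A minimal in-memory sketch of the mechanism (the real library persists its checkpoint in the database; the store and method names here are illustrative):

```java
import java.util.*;

public class CheckpointedRun {
    // Processes rows in batches; the offset is committed only after a full batch succeeds.
    public static int run(List<String> rows, int batchSize,
                          Map<String, Integer> checkpointStore, String runId,
                          int failAtRow /* -1 = no crash */) {
        int offset = checkpointStore.getOrDefault(runId, 0); // resume point
        int processed = 0;
        while (offset < rows.size()) {
            int end = Math.min(offset + batchSize, rows.size());
            for (int i = offset; i < end; i++) {
                if (i == failAtRow) return processed; // simulated crash mid-batch
                processed++;
            }
            offset = end;
            checkpointStore.put(runId, offset); // checkpoint after commit
        }
        return processed;
    }

    public static void main(String[] args) {
        Map<String, Integer> store = new HashMap<>();
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add("row" + i);
        run(rows, 3, store, "r1", 7);                // crashes inside the third batch
        int resumed = run(rows, 3, store, "r1", -1); // resumes from offset 6, not 0
        System.out.println("rows on resume: " + resumed); // 4
    }
}
```

The second run redoes only the uncommitted batch, which is why a crash at row 50,000 does not mean 50,000 rows of rework.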

A real migration, ready to clone

example-csv loads a CSV file into a PostgreSQL staging table, then Pulsaride transforms and validates the rows into the target: 30 clean rows land, and 4 rows with DQ failures are intentionally rejected to the DLQ. Three Java files. Zero business logic.

migration/products.yaml — Transform + DQ rules
src/.../application.yml — Zero-code config
schema/01-init.sql — Staging + target DDL
data/products.csv — 30 clean + 4 DQ failures

Spring Boot 3 · PostgreSQL 16 · Docker Compose · Java 21
View example-csv on GitLab ↗

The complete migration rules file for this example:

migration/products.yaml
name: products-csv-load
version: "1.0"
target_table: products

sources:
  - name: p
    type: staging-table
    table: products_raw

fields:
  - name: id
    source: p.id
  - name: sku
    expression: "trim(?1)"
    source: p.sku
    expect: "IS NOT NULL"
  - name: name
    expression: "trim(?1)"
    source: p.name
    expect: "IS NOT NULL"
  - name: price
    source: p.price
    target_type: DECIMAL
    expect: "> 0"
  - name: stock_quantity
    source: p.stock_quantity
    target_type: INTEGER
    expect: ">= 0"

filter: "sku IS NOT NULL AND name IS NOT NULL"
reject_policy: SKIP
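A toy pass over this example's rows shows how a SKIP-style policy separates loaded rows from DLQ entries, each with a rejection reason. The policy semantics, row shape, and names below are my assumption for illustration, not the library's documented behavior:

```java
import java.util.*;

public class DlqDemo {
    public record Rejected(String sku, String reason) {}

    // SKIP-style rejection: a failed check sends the row to the DLQ
    // with a reason, and the run continues with the next row.
    public static List<Rejected> load(List<Map<String, Object>> rows,
                                      List<Map<String, Object>> target) {
        List<Rejected> dlq = new ArrayList<>();
        for (Map<String, Object> row : rows) {
            Double price = (Double) row.get("price");
            if (row.get("sku") == null) dlq.add(new Rejected(null, "sku IS NOT NULL failed"));
            else if (price == null || price <= 0)
                dlq.add(new Rejected((String) row.get("sku"), "price > 0 failed"));
            else target.add(row);
        }
        return dlq;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> target = new ArrayList<>();
        Map<String, Object> bad = new HashMap<>();
        bad.put("sku", "B-2");
        bad.put("price", -1.0); // fails the "> 0" check
        List<Rejected> dlq = load(
            List.of(Map.of("sku", "A-1", "price", (Object) 9.99), bad), target);
        System.out.println(target.size() + " loaded, " + dlq.size() + " in DLQ");
    }
}
```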

From zero to a running migration

One Maven dependency. Two YAML files. docker compose up. Done.

01

Add the Maven dependency to your pom.xml

pom.xml
<dependency>
  <groupId>com.pulsaride</groupId>
  <artifactId>pulsaride-transform</artifactId>
  <version>2.1.0</version>
</dependency>
02

Configure your database connections

application.yml
spring:
  datasource:
    url: ${POSTGRES_URL}
    username: ${POSTGRES_USER}
    password: ${POSTGRES_PASSWORD}

pulsaride:
  migration:
    source-table: PRODUCTS
    target-table: products
    transform: migration/products.yaml
    mode: SIMPLE
03

Declare your first table mapping

migration/products.yaml
name: products-migration
version: "1.0"
target_table: products

sources:
  - name: p
    table: PRODUCTS

fields:
  - name: id
    source: p.PRODUCT_ID
  - name: name
    expression: "trim(?1)"
    source: p.PRODUCT_NAME
    expect: "IS NOT NULL"
  - name: price
    source: p.UNIT_PRICE
    target_type: decimal(10,2)
    expect: "> 0"

filter: "STATUS = 'ACTIVE'"
reject_policy: FAIL_ROW
04

Run the migration

terminal
docker compose up
[pulsaride] Starting migration run #1
[pulsaride] products      → 12 483 rows  ✓
[pulsaride] orders        →  4 201 rows  ✓
[pulsaride] order_items   → 18 902 rows  ✓
[pulsaride] Migration complete in 4.2s
[pulsaride] Dashboard: http://localhost:8080/pulsaride
Full quickstart guide →

How it compares

Against the tools teams typically choose — or build themselves.

| Feature | Pulsaride | Custom Spring Batch | Talend | Informatica |
|---|---|---|---|---|
| Declarative YAML transform rules | ✓ | — | — | — |
| Spring auto-configuration (zero @Bean) | ✓ | — | — | — |
| FK-aware execution (depends_on deferral) | ✓ | Manual | — | — |
| CDC mode (Debezium/Kafka + circuit breaker) | ✓ | Manual | Add-on | Add-on |
| Dead-letter queue (per row, with context) | ✓ | — | Limited | Paid |
| Live monitoring dashboard (WebSocket) | ✓ | — | — | Paid |
| Data quality assertions (expect:) | ✓ | — | — | — |
| Row-level diff on every run | ✓ | — | — | Paid add-on |

Stop writing
BatchConfiguration.java.

That file doesn't need to exist. Describe the migration in YAML — field mappings, FK dependencies, data quality rules — and Pulsaride runs it, restarts it, and tells you when it's safe to cut over.

Read the Docs → · View Example ↗ · Get in touch