The problem with custom BatchConfiguration.java
Every Java team has one. It started at 50 lines. It's 300 now. It's not the migration — it's everything around it.
Happy-path code only
Your BatchConfiguration.java covers the normal case. The FK violation at row 847, the network timeout at 2 a.m., the re-run that doubled counts: those were handled in Slack, not in code.
No clean restart story
Ask a teammate: if the process dies at row 50,000, does it resume from the checkpoint or start over? If they have to read the code to answer, you don't have a restart story.
FK ordering is your responsibility
You hand-wrote the migration step order based on your schema knowledge. When the schema changes, that order needs to change too — and there is nothing to enforce it.
Auto-configuration replaces the boilerplate
The three Java files that remain: App.java, DataSourceConfig.java, and MigrationRunner.java, at 35 lines total.
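A minimal sketch of what MigrationRunner.java can reduce to. This is illustrative only: the `io.pulsaride` package and the `start()` method are assumptions, not the engine's confirmed API.

```java
// MigrationRunner.java -- illustrative sketch; package and method names are assumptions.
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

import io.pulsaride.MigrationEngine; // hypothetical package; check the actual artifact

@Component
public class MigrationRunner implements CommandLineRunner {

    private final MigrationEngine engine; // auto-configured from application.yml

    public MigrationRunner(MigrationEngine engine) {
        this.engine = engine;
    }

    @Override
    public void run(String... args) {
        engine.start(); // mapping, ordering, retries all live in YAML, not here
    }
}
```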
Everything the glue code was supposed to do
The capabilities your hand-rolled BatchConfiguration.java never quite had.
Full Load & CDC
SIMPLE for small tables, partitioned SCALE for large ones. AUTO selects based on row count. CDC mode uses Debezium with per-table circuit breakers and auto-replay.
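In configuration, that choice looks roughly like the sketch below; the key names are assumptions based on the description above, not the exact schema.

```yaml
# Illustrative per-table load modes; key names are assumptions.
tables:
  - name: customers
    mode: SIMPLE        # small table, single pass
  - name: orders
    mode: SCALE         # large table, partitioned load
  - name: events
    mode: AUTO          # engine picks SIMPLE or SCALE from the row count
cdc:
  enabled: true         # Debezium-backed change capture
  circuit-breaker: per-table
```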
FK-Aware Execution
depends_on defers rows when their FK target hasn't arrived yet. Deferred rows replay automatically on dependency completion. No step ordering, no silent drops.
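A sketch of what a `depends_on` declaration looks like; the surrounding structure is an assumption, only the key itself comes from the feature description.

```yaml
# Sketch: order_items rows that arrive before their parent order are deferred,
# then replayed automatically once orders completes.
tables:
  - name: orders
  - name: order_items
    depends_on: orders   # defer rows whose FK target has not arrived yet
```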
Spring Auto-Configuration
BatchAutoConfiguration is excluded automatically. MigrationEngine activates from application.yml. Zero @Bean definitions in your project.
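Activation is a few lines of application.yml, roughly like this sketch; the exact property names are assumptions.

```yaml
# application.yml -- illustrative activation block; property names are assumptions.
pulsaride:
  enabled: true
  rules: classpath:migration/products.yaml   # point the engine at the rules file
```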
Live Monitoring
WebSocket dashboard at :8080/pulsaride. Run log, DLQ depth, per-table progress, alert thresholds. Spring Actuator health at /actuator/health/pulsaride.
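The health group can be probed from a script; the endpoint path comes from above, and the response is the standard Actuator shape.

```bash
# Probe the Pulsaride health group exposed via Spring Actuator:
curl -s http://localhost:8080/actuator/health/pulsaride
# typically: {"status":"UP"}
```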
Data Quality
expect: assertions on any field. Failed rows land in pulsaride_dlq with the source payload and rejection reason. Row-level diff report on every run.
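A sketch of what `expect:` rules might look like; the rule syntax here is an assumption based on the description, not the documented grammar.

```yaml
# Illustrative expect: assertions; rule syntax is an assumption.
# Failing rows land in pulsaride_dlq with the source payload and rejection reason.
tables:
  - name: products
    expect:
      - field: price
        rule: "value > 0"
      - field: sku
        rule: "value is not null"
```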
Restartable by Default
Every run checkpoints at the batch level. A crash at row 50,000 resumes from the last committed batch — not from row zero. Config drift detection blocks unsafe re-runs.
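Checkpoint and drift behavior would be tuned in configuration, perhaps along these lines; both property names below are assumptions.

```yaml
# Illustrative restart settings; property names are assumptions.
pulsaride:
  checkpoint:
    batch-size: 1000       # commit and checkpoint every 1,000 rows
  restart:
    drift-check: strict    # block re-runs if the YAML rules changed since the crash
```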
A real migration, ready to clone
example-csv loads a CSV file into a PostgreSQL staging table, then Pulsaride transforms and validates 30 clean rows into the target — intentionally rejecting 4 rows with DQ failures to the DLQ. Three Java files. Zero business logic.
The example ships four files:

- `migration/products.yaml`: transform + DQ rules
- `src/.../application.yml`: zero-code config
- `schema/01-init.sql`: staging + target DDL
- `data/products.csv`: 30 clean rows + 4 DQ failures
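The migration rules file for this example looks roughly like the sketch below. This is a reconstruction for illustration; every key and field name is an assumption, and the cloneable example holds the authoritative version.

```yaml
# migration/products.yaml -- illustrative reconstruction, not the shipped file.
tables:
  - name: products
    source: staging_products        # loaded from data/products.csv
    target: products
    map:
      sku:   source.sku
      name:  source.product_name
      price: source.unit_price
    expect:
      - field: price
        rule: "value > 0"           # 4 of the 34 CSV rows fail DQ and land in pulsaride_dlq
      - field: sku
        rule: "value is not null"
```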
From zero to a running migration
One Maven dependency. Two YAML files. docker compose up. Done.
Add the Maven dependency to your pom.xml
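For example, with placeholder coordinates; substitute the groupId, artifactId, and version published for the release you want.

```xml
<!-- Illustrative coordinates; use the published ones for your release. -->
<dependency>
  <groupId>io.pulsaride</groupId>
  <artifactId>pulsaride-spring-boot-starter</artifactId>
  <version>1.0.0</version>
</dependency>
```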
Configure your database connections
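A sketch of the source and target connections in application.yml; the `pulsaride.source`/`pulsaride.target` property names are assumptions.

```yaml
# application.yml -- illustrative connection block; property names are assumptions.
pulsaride:
  source:
    url: jdbc:postgresql://localhost:5432/legacy
    username: ${SOURCE_DB_USER}
    password: ${SOURCE_DB_PASSWORD}
  target:
    url: jdbc:postgresql://localhost:5432/warehouse
    username: ${TARGET_DB_USER}
    password: ${TARGET_DB_PASSWORD}
```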
Declare your first table mapping
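A minimal first mapping might look like this; table and field names are placeholders, and the keys follow the same assumed schema as the sketches above.

```yaml
# migration/customers.yaml -- minimal first mapping; keys are illustrative.
tables:
  - name: customers
    source: legacy_customers
    target: customers
    mode: AUTO
    map:
      id:    source.customer_id
      email: source.email_address
```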
Run the migration
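Then bring everything up and watch the dashboard; the two URLs come from the monitoring section above.

```bash
# Bring up the databases and the engine, then watch the run live:
docker compose up
# dashboard: http://localhost:8080/pulsaride
# health:    http://localhost:8080/actuator/health/pulsaride
```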
How it compares
Against the tools teams typically choose — or build themselves.
| Feature | Pulsaride | Custom Spring Batch | Talend | Informatica |
|---|---|---|---|---|
| Declarative YAML transform rules | ✓ | — | — | — |
| Spring auto-configuration (zero @Bean) | ✓ | — | — | — |
| FK-aware execution (depends_on deferral) | ✓ | Manual | — | — |
| CDC mode (Debezium/Kafka + circuit breaker) | ✓ | Manual | Add-on | Add-on |
| Dead-letter queue (per row, with context) | ✓ | — | Limited | Paid |
| Live monitoring dashboard (WebSocket) | ✓ | — | — | Paid |
| Data quality assertions (expect:) | ✓ | — | — | — |
| Row-level diff on every run | ✓ | — | — | Paid add-on |
Stop writing BatchConfiguration.java.
That file doesn't need to exist. Describe the migration in YAML — field mappings, FK dependencies, data quality rules — and Pulsaride runs it, restarts it, and tells you when it's safe to cut over.