Temporal + TigerBeetle + Kafka: A Stack for High-Fidelity Fintech
How we built a payment processing backbone that guarantees exactly-once semantics, double-entry accounting correctness, and full auditability — without a single distributed transaction.
The hardest problems in fintech engineering are not the ones that look hard. They're the ones that look like plumbing — until they fail at 2am with $400,000 in a liminal state between two ledgers, and no way to tell which side of the transaction is authoritative.
We've built payment systems on top of traditional RDBMS, on Kafka with bespoke idempotency logic, on event-sourced systems with hand-rolled projections. In 2024, for a payments infrastructure client, we built on a stack we believe is a step-change improvement: Temporal for orchestration, TigerBeetle for ledger accounting, and Kafka for event streaming. This is what we learned.
The problem with most payment stacks
Payment processing involves coordinating state across multiple systems — the ledger, the payment rails, the notification service, the fraud check, the reconciliation feed. Each step can fail. The network between steps can fail. The orchestrating service itself can fail.
Most systems handle this with one of two approaches:
The saga pattern distributes the transaction across services with compensating transactions for rollback. This is correct in theory. In practice, compensating transactions are hard to write, easy to get wrong, and become unmanageable as business logic grows.
The "just use a database transaction" approach works until you need to call an external API (payment rail, fraud vendor) from inside the transaction boundary — which breaks the atomicity guarantee immediately.
Neither approach gives you durable, inspectable, resumable orchestration that survives process failures without complex recovery logic.
Temporal for workflow orchestration
Temporal is a durable workflow engine. You write your payment processing logic as a regular function — call the fraud check, then debit the sender, then credit the receiver, then notify, then update the reconciliation feed. If any step fails, Temporal automatically retries from the failure point. If your entire service crashes mid-payment, Temporal resumes from where it left off when the service comes back up.
The key property: your workflow code is replayed deterministically on restart. Temporal stores the event history and replays it to reconstruct the in-memory state. This means you get durable orchestration without writing any recovery logic yourself.
For our client's use case — multi-step cross-border payments with external rail calls and regulatory reporting steps — this was transformative. A payment that previously required defensive coding at every step became a readable sequential function.
func ProcessPayment(ctx workflow.Context, req PaymentRequest) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{StartToCloseTimeout: 30 * time.Second})

	var fraud FraudResult
	if err := workflow.ExecuteActivity(ctx, FraudCheck, req).Get(ctx, &fraud); err != nil {
		return err
	}
	if fraud.Declined {
		return ErrFraudDeclined
	}
	for _, step := range []any{DebitSender, CreditReceiver, NotifyParties, UpdateReconciliation} {
		if err := workflow.ExecuteActivity(ctx, step, req).Get(ctx, nil); err != nil {
			return err
		}
	}
	return nil
}
If the service crashes after DebitSender but before CreditReceiver, Temporal resumes exactly there. No bespoke recovery code. No saga compensation logic.
TigerBeetle for the ledger
TigerBeetle is a financial accounting database built for exactly-once, double-entry bookkeeping at high throughput. It's not a general-purpose database — it does one thing: it maintains a set of accounts and records transfers between them with atomic, idempotent semantics.
The key properties that made it right for our use case:
Idempotency by design. Every transfer carries a unique 128-bit identifier. If you submit a transfer with the same ID twice, TigerBeetle applies it once; the duplicate submission is acknowledged without a second posting. This eliminates the category of "double-charge" bugs that plagues systems where idempotency is bolted onto a general-purpose database.
Double-entry is enforced. TigerBeetle only accepts transfers that balance — debits must equal credits. You cannot create money or lose money at the data layer. This constraint, enforced by the database rather than by application code, catches a class of accounting bugs before they happen.
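The invariant being enforced can be illustrated with a toy in-memory check (our sketch, not TigerBeetle's implementation): an entry is accepted only if its debit and credit legs net to zero, and a rejected entry leaves no trace.

```go
package main

import (
	"errors"
	"fmt"
)

// Leg is one side of a journal entry: positive = debit, negative = credit.
type Leg struct {
	Account string
	Amount  int64
}

var ErrUnbalanced = errors.New("entry does not balance")

// post applies a multi-leg entry atomically, rejecting it outright if the
// legs do not sum to zero — money can be moved, never created or destroyed.
func post(balances map[string]int64, legs []Leg) error {
	var sum int64
	for _, l := range legs {
		sum += l.Amount
	}
	if sum != 0 {
		return ErrUnbalanced // rejected before any balance is touched
	}
	for _, l := range legs {
		balances[l.Account] += l.Amount
	}
	return nil
}

func main() {
	bal := map[string]int64{}
	fmt.Println(post(bal, []Leg{{"alice", -100}, {"bob", 100}})) // balanced: accepted
	fmt.Println(post(bal, []Leg{{"alice", -100}, {"bob", 90}}))  // unbalanced: rejected
}
```

Pushing this check into the database means application code cannot forget it — which is exactly the class of bug the quote below describes.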
High throughput. TigerBeetle is designed for 1M+ transfers per second on commodity hardware. For our client's volumes (peak 40k transfers/hour), this was more than sufficient, but the headroom matters for burst tolerance.
"The first time we tried to submit an unbalanced transfer in testing, TigerBeetle rejected it immediately. Our previous ledger implementation would have accepted it and we'd have found the discrepancy in the nightly reconciliation run — if we were lucky."
Kafka as the event bus
Temporal and TigerBeetle handle orchestration and accounting respectively. Kafka handles the event stream that connects them to the rest of the system — fraud vendors, notification services, reporting pipelines, external rails.
We use Kafka for:
- Inbound payment requests from client applications (new consumer groups can replay the topic from the beginning for backfill)
- Outbound events from Temporal workflows (payment initiated, payment completed, payment failed) consumed by notification and analytics services
- Reconciliation feeds consumed by the finance team's reporting pipeline
One design decision worth noting: we don't use Kafka for the critical path inside a payment transaction. The Temporal → TigerBeetle calls are synchronous within the workflow. Kafka is for the non-critical fanout — the things that need to know about payments after they complete, not during.
What we didn't use
Distributed transactions (2PC/Saga). Temporal's durable execution model made these unnecessary. We have Temporal's event history as our recovery mechanism.
An ORM or general-purpose database for the ledger. Every time we've seen a payment system built on Postgres with application-level double-entry logic, we've found at least one category of bug that TigerBeetle's constraints would have prevented.
Exactly-once semantics from Kafka. Kafka's transactional exactly-once mode is complex to operate correctly. Instead, our Temporal activities pass deterministic idempotency keys down to TigerBeetle, whose transfer idempotency achieves exactly-once effects at the point that matters — the ledger.
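The same reasoning applies to downstream consumers: with plain at-least-once delivery, a consumer that applies events idempotently gets effectively-once outcomes without Kafka transactions. A sketch using an in-memory seen-set keyed by event ID (in production this set would live in the consumer's own store, updated in the same transaction as the effect):

```go
package main

import "fmt"

// Consumer receives events at-least-once but records each effect exactly
// once by deduplicating on event ID — effect-level exactly-once without
// broker-level transactions.
type Consumer struct {
	seen    map[string]bool
	applied int
}

func NewConsumer() *Consumer { return &Consumer{seen: map[string]bool{}} }

func (c *Consumer) Handle(eventID string) {
	if c.seen[eventID] {
		return // duplicate delivery: drop silently
	}
	c.seen[eventID] = true
	c.applied++ // the side effect happens exactly once per event
}

func main() {
	c := NewConsumer()
	for _, id := range []string{"evt_1", "evt_2", "evt_1"} { // evt_1 redelivered
		c.Handle(id)
	}
	fmt.Println(c.applied) // 2
}
```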
The production numbers
After three months in production:
- Zero double-charges (versus 2–3 per month on the previous system)
- Zero unbalanced ledger entries
- Payment workflow completion rate: 99.97% (remaining 0.03% were intentional fraud declines)
- Mean payment processing time: 1.4 seconds end-to-end including fraud check
- Recovery time after one unplanned service restart: 0 seconds — Temporal resumed all in-flight workflows automatically
Five things to take away
- Temporal's durable execution model eliminates an entire category of distributed transaction complexity — write your workflow as sequential code and let the engine handle failures.
- TigerBeetle's enforced double-entry and idempotency constraints prevent accounting bugs at the data layer, not the application layer.
- Use Kafka for fanout to non-critical consumers, not for the critical payment path itself.
- Idempotency keys should live as close to the ledger as possible — application-level idempotency is fragile.
- The best time to switch to a purpose-built ledger is before you have your first unexplained reconciliation discrepancy.