trampoline: accumulate inbound trampoline htlcs#4493

Draft
carlaKC wants to merge 25 commits into lightningdevkit:main from carlaKC:2299-mpp-accumulation

Conversation


@carlaKC carlaKC commented Mar 18, 2026

This PR handles accumulation of inbound MPP trampoline parts, including handling of timeout and MPP validation. When all parts are successfully accumulated, we'll fail the MPP set backwards as we do not yet have support for outbound dispatch.

It does not include:

  • Handling trampoline replays / reload from disk (we currently refuse to read HTLCSource::TrampolineForward to prevent downgrade with trampoline in flight).
  • Interception of trampoline forwards, which I think we should add a separate flag for because it's difficult to map to our existing structure when we don't know the outbound channel at time of interception.

A few PR notes:

  • There's quite a lot of refactoring here, because a lot of the work added to support trampoline receives didn't consider the MPP forwarding case.
  • Happy to pull the refactoring out into a separate PR; there's a lot of mechanical stuff here that could easily be separated out.

carlaKC added 18 commits March 18, 2026 12:53
We don't need to track a single trampoline secret in our HTLCSource
because this is already tracked in each of our previous hops contained
in the source. This field was unnecessarily added under the belief that
each inner trampoline onion we receive for inbound MPP trampoline would
have the same session key. It can be removed despite the breaking change
to persistence because we have not yet released a version with the old
serialization - we currently refuse to decode trampoline forwards, and
will not read HTLCSource::Trampoline, to prevent downgrades.
When we receive a trampoline forward, we need to wait for MPP parts to
arrive at our node before we can forward the outgoing payment onwards.
This commit threads this information through to our pending htlc struct
which we'll use to validate the parts we receive.
For regular blinded forwards, it's okay to use the amount in our
update_add_htlc to calculate the amount that we need to forward onwards,
because we're only expecting one HTLC in and one HTLC out.

For blinded trampoline forwards, it's possible that multiple incoming
HTLCs need to accumulate at our node to make up the total incoming
amount from which we'll calculate the amount that we need to forward
onwards to the next trampoline. This commit updates our next trampoline
amount calculation to use the total intended incoming amount for the
payment, so we can correctly calculate the next trampoline's amount.

`decode_incoming_update_add_htlc_onion` is left unchanged because
the call to `check_blinded` will be removed in upcoming commits.
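The amount calculation described in this commit can be sketched roughly as follows. This is a hedged illustration only; the function and parameter names are invented for this sketch and are not LDK's actual API:

```rust
// Hedged sketch, not LDK's actual API: derive the amount owed to the
// next trampoline and our earned fee from the *total* intended incoming
// amount (summed over all MPP parts), never from a single
// update_add_htlc's amount.
fn next_trampoline_amounts(
    total_intended_incoming_msat: u64,
    inner_onion_amt_to_forward_msat: u64,
) -> Option<(u64, u64)> {
    // Our fee is whatever the incoming total leaves over after paying
    // the next trampoline what its inner onion demands; a total that
    // cannot cover that amount is invalid.
    let fee_msat =
        total_intended_incoming_msat.checked_sub(inner_onion_amt_to_forward_msat)?;
    Some((inner_onion_amt_to_forward_msat, fee_msat))
}
```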
When we are a trampoline node receiving an incoming HTLC, we need access
to our outer onion's amount_to_forward to check that we have been
forwarded the correct amount. We can't use the amount in the inner
onion, because that contains our fee budget - somebody could forward us
less than we were intended to receive, and provided it is within the
trampoline fee budget we wouldn't know.

In this commit we set our outer onion values in PendingHTLCInfo to
perform this validation properly. In the commit that follows, we'll
start tracking our expected trampoline values in trampoline-specific
routing info.
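The validation concern above can be illustrated with a minimal sketch. The names here are invented, not LDK's; the point is only that the check must compare against the outer onion's amount, because the inner onion's amount includes our fee budget:

```rust
// Hypothetical illustration (not LDK code): a forwarding node that
// shorts us by an amount within our trampoline fee budget would pass an
// inner-onion-only check, so we compare the HTLC amount against the
// outer onion, which carries the sender's intent for this hop.
fn validate_incoming_amount(
    htlc_amount_msat: u64,
    outer_onion_amt_msat: u64,
) -> Result<(), &'static str> {
    if htlc_amount_msat < outer_onion_amt_msat {
        Err("forwarding node delivered less than the sender intended")
    } else {
        Ok(())
    }
}
```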
When we're forwarding a trampoline payment, we need to remember the
amount and CLTV that the next trampoline is expecting.
When we receive trampoline payments, we first want to validate the
values in our outer onion, ensuring that we've been given the amount and
expiry the sender intended us to receive and that forwarding nodes
haven't sent us less than they should.
We'll re-use this logic to timeout tick incoming trampoline MPP.
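The timeout-tick pattern can be sketched as below. This is a hedged, self-contained illustration; the types and names are invented for this sketch and do not match LDK's internals:

```rust
use std::collections::HashMap;

// Hedged sketch of timeout ticking pending MPP sets: each pending set
// carries a tick countdown, and on every timer tick expired sets are
// drained so the caller can fail their accumulated HTLCs backwards.
struct PendingSet {
    ticks_remaining: u8,
}

fn timer_tick(pending: &mut HashMap<u64, PendingSet>) -> Vec<u64> {
    let mut expired = Vec::new();
    pending.retain(|payment_id, set| {
        if set.ticks_remaining == 0 {
            expired.push(*payment_id); // caller fails these backwards
            false
        } else {
            set.ticks_remaining -= 1;
            true
        }
    });
    expired
}
```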
We're going to need to keep track of our trampoline HTLCs in the same
way that we keep track of incoming MPP payments, allowing them to
accumulate on our incoming channel before forwarding them onwards to
the outgoing channel. To do this, we'll need to store the payload
values we need to remember for forwarding in OnionPayload.
When we are a trampoline router, we need to accumulate incoming HTLCs
(if MPP is used) before forwarding the trampoline-routed outgoing
HTLC(s). This commit adds a new map in channel manager, and mimics the
handling done for claimable_payments.

We will rely on our pending_outbound_payments (which will contain a
dispatched payment for trampoline forwards) for completing MPP claims,
and we do not want to surface `PaymentClaimable` events for trampoline,
so we do not need pending_claiming_payments like we have for MPP
receives. As the handling differs, we track trampoline MPP parts in a
map separate from `claimable_payments`.
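A rough shape for such an accumulation map is sketched below. This is a hedged illustration under invented names; none of these types mirror LDK's actual structures:

```rust
use std::collections::HashMap;

// Hypothetical sketch of a per-payment accumulation map: parts are
// collected under their payment hash, and the set is only released once
// the received total reaches the sender's intended total.
#[derive(Default)]
struct TrampolineForwards {
    pending: HashMap<[u8; 32], Vec<u64>>, // payment hash -> part amounts
}

impl TrampolineForwards {
    /// Returns the complete set of part amounts once accumulation is done.
    fn add_part(
        &mut self,
        payment_hash: [u8; 32],
        part_msat: u64,
        total_intended_msat: u64,
    ) -> Option<Vec<u64>> {
        let parts = self.pending.entry(payment_hash).or_default();
        parts.push(part_msat);
        if parts.iter().sum::<u64>() >= total_intended_msat {
            self.pending.remove(&payment_hash)
        } else {
            None
        }
    }
}
```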
We're going to use the same logic for trampoline and for incoming MPP
payments, so we pull this out into a separate function.
We'll only use this for non-trampoline incoming accumulated HTLCs,
because we want different source/failure handling for trampoline.
Add our MPP accumulation logic for trampoline payments, but reject
them when they fully arrive. This allows us to test parts of our
trampoline flow without fully enabling it.

This commit keeps the same committed_to_claimable debug_assert behavior
as MPP claims, asserting that we do not fail our
check_claimable_incoming_htlc merge for the first HTLC that we add to a
set. This assert could also be hit if the intended amount exceeds
`MAX_VALUE_MSAT`, but we can't hit this in practice.
If we're a trampoline node and received an error from downstream that
we can't fully decrypt, we want to double-wrap it for the original
sender. Previously not implemented because we'd only focused on
receives, where there's no possibility of a downstream error.

While proper error handling will be added in a followup, we add the
bare minimum required here for testing.
To use helper functions for either trampoline or regular paths.
@ldk-reviews-bot

👋 Hi! I see this is a draft PR.
I'll wait to assign reviewers until you mark it as ready for review.
Just convert it out of draft status when you're ready for review!

SpontaneousPayment(PaymentPreimage),
/// HTLCs terminating at our node are intended for forwarding onwards as a trampoline
/// forward.
Trampoline {},
Contributor Author

This isn't ideal because we'll never actually surface it on the API, but we currently use the external type internally - didn't seem worth a refactor, but can do if others think so!

carlaKC added 4 commits March 18, 2026 13:35
For trampoline payments, we don't want to enforce a minimum cltv delta
between our incoming and outer onion outgoing CLTV because we'll
calculate our delta from the inner trampoline onion's value. However,
we still want to check that we get at least the CLTV that the sending
node intended for us and we still want to validate our incoming value.
Refactor to allow setting a zero delta, for use with trampoline payments.
We can't perform proper validation because we don't know the outgoing
channel id until we forward the HTLC, so we just perform a basic CLTV
check.

Now that we've got rejection on inbound MPP accumulation, we relax this
check to allow testing of inbound MPP trampoline processing.
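The relaxed check can be sketched in one line; this is a hedged illustration with an invented function name, not LDK's actual validation code:

```rust
// Hedged sketch: regular forwards enforce a minimum delta between the
// incoming and outgoing CLTV, while trampoline forwards pass a delta of
// zero, requiring only that we receive at least the expiry the sender
// intended for us.
fn cltv_check_passes(incoming_cltv: u32, outgoing_cltv: u32, min_delta: u32) -> bool {
    incoming_cltv >= outgoing_cltv.saturating_add(min_delta)
}
```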
To create trampoline forwarding and single hop receiving tails.
@carlaKC carlaKC force-pushed the 2299-mpp-accumulation branch from 2e6c2cb to 2f01cdc Compare March 18, 2026 17:37
carlaKC added 3 commits March 18, 2026 13:50
Will be used in the commit that follows to create ClaimableHTLC in
tests, so that we don't have to bump every field to pub(crate).
@carlaKC carlaKC force-pushed the 2299-mpp-accumulation branch from 2f01cdc to 9d17783 Compare March 18, 2026 17:53