Use final spec values for splicing #2887

Draft · wants to merge 1 commit into master

33 changes: 33 additions & 0 deletions docs/release-notes/eclair-vnext.md
@@ -6,6 +6,39 @@

<insert changes>

### Channel Splicing

With this release, we add support for the final version of [splicing](https://github.com/lightning/bolts/pull/1160) that was recently added to the BOLTs.
Splicing allows node operators to change the size of their existing channels, which makes it easier and more efficient to allocate liquidity where it is most needed.
Most node operators can now use a single channel with each of their peers, which reduces on-chain fees and resource usage, and makes path-finding easier.

The size of an existing channel can be increased with the `splicein` API:

```sh
eclair-cli splicein --channelId=<channel_id> --amountIn=<amount_satoshis>
```

Once that transaction confirms, the additional liquidity can be used to send outgoing payments.
If the transaction doesn't confirm quickly enough, the node operator can speed up its confirmation with the `rbfsplice` API:

```sh
eclair-cli rbfsplice --channelId=<channel_id> --targetFeerateSatByte=<feerate_satoshis_per_byte> --fundingFeeBudgetSatoshis=<maximum_on_chain_fee_satoshis>
```

If the node operator wants to reduce the size of a channel, or send some of the channel funds to an on-chain address, they can use the `spliceout` API:

```sh
eclair-cli spliceout --channelId=<channel_id> --amountOut=<amount_satoshis> --scriptPubKey=<on_chain_address>
```

That operation can also be RBF-ed with the `rbfsplice` API to speed up confirmation if necessary.
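For example, using the same parameters shown above (no extra flag is needed to distinguish a splice-in from a splice-out):

```sh
eclair-cli rbfsplice --channelId=<channel_id> --targetFeerateSatByte=<feerate_satoshis_per_byte> --fundingFeeBudgetSatoshis=<maximum_on_chain_fee_satoshis>
```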

Note that when the channel uses 0-conf, splice transactions cannot be RBF-ed.
Node operators should instead create a new splice transaction (with `splicein` or `spliceout`) that CPFPs the previous one, as shown below.
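For example, a splice-in issued while the previous splice is still unconfirmed spends its funding output and therefore pays for it (same parameters as above):

```sh
# RBF is unavailable on a 0-conf channel: create a new splice instead,
# which acts as a CPFP for the unconfirmed splice transaction.
eclair-cli splicein --channelId=<channel_id> --amountIn=<amount_satoshis>
```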

Note that eclair introduced support for a splicing prototype in v0.9.0, which helped improve the BOLT proposal.
We're removing support for that splicing prototype: users who depended on this protocol must upgrade to create official splice transactions.

### Package relay

With Bitcoin Core 28.1, eclair starts relying on the `submitpackage` RPC during channel force-close.
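As an illustration (eclair makes this call internally through its Bitcoin Core RPC client; the hex values below are placeholders), `submitpackage` accepts the related transactions as a single package:

```sh
# Broadcast the commitment transaction together with its anchor (CPFP) spend,
# so that Bitcoin Core evaluates their fees as a package.
bitcoin-cli submitpackage '["<commitment_tx_hex>", "<anchor_spending_tx_hex>"]'
```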
1 change: 1 addition & 0 deletions eclair-core/src/main/resources/reference.conf
@@ -88,6 +88,7 @@ eclair {
option_zeroconf = disabled
keysend = disabled
option_simple_close = optional
option_splice = optional
trampoline_payment_prototype = disabled
async_payment_prototype = disabled
on_the_fly_funding = disabled
17 changes: 7 additions & 10 deletions eclair-core/src/main/scala/fr/acinq/eclair/Features.scala
@@ -264,8 +264,7 @@ object Features {
val mandatory = 28
}

// TODO: this should also extend NodeFeature once the spec is finalized
case object Quiescence extends Feature with InitFeature {
case object Quiescence extends Feature with InitFeature with NodeFeature {
val rfcName = "option_quiesce"
val mandatory = 34
}
@@ -314,6 +313,11 @@
val mandatory = 60
}

case object Splicing extends Feature with InitFeature with NodeFeature {
val rfcName = "option_splice"
val mandatory = 62
}

/** This feature bit indicates that the node is a mobile wallet that can be woken up via push notifications. */
case object WakeUpNotificationClient extends Feature with InitFeature {
val rfcName = "wake_up_notification_client"
@@ -337,12 +341,6 @@
val mandatory = 152
}

// TODO: @pm47 custom splices implementation for phoenix, to be replaced once splices is spec-ed (currently reserved here: https://github.com/lightning/bolts/issues/605)
case object SplicePrototype extends Feature with InitFeature {
val rfcName = "splice_prototype"
val mandatory = 154
}

/**
* Activate this feature to provide on-the-fly funding to remote nodes, as specified in bLIP 36: https://github.com/lightning/blips/blob/master/blip-0036.md.
* TODO: add NodeFeature once bLIP is merged.
@@ -386,10 +384,10 @@
ZeroConf,
KeySend,
SimpleClose,
Splicing,
WakeUpNotificationClient,
TrampolinePaymentPrototype,
AsyncPaymentPrototype,
SplicePrototype,
OnTheFlyFunding,
FundingFeeCredit
)
@@ -406,7 +404,6 @@
KeySend -> (VariableLengthOnion :: Nil),
SimpleClose -> (ShutdownAnySegwit :: Nil),
AsyncPaymentPrototype -> (TrampolinePaymentPrototype :: Nil),
OnTheFlyFunding -> (SplicePrototype :: Nil),
FundingFeeCredit -> (OnTheFlyFunding :: Nil)
)

Changes to another file:
@@ -658,7 +658,8 @@ case class Commitment(fundingTxIndex: Long,
Metrics.recordHtlcsInFlight(spec, remoteCommit.spec)

val tlvs = Set(
if (batchSize > 1) Some(CommitSigTlv.BatchTlv(batchSize)) else None
if (batchSize > 1) Some(CommitSigTlv.FundingTx(fundingTxId)) else None,
if (batchSize > 1) Some(CommitSigTlv.ExperimentalBatchTlv(batchSize)) else None,
).flatten[CommitSigTlv]
val commitSig = params.commitmentFormat match {
case _: SegwitV0CommitmentFormat =>
@@ -1042,8 +1043,10 @@ case class Commitments(params: ChannelParams,
case commitSig: CommitSig => Seq(commitSig)
}
val commitKeys = LocalCommitmentKeys(params, channelKeys, localCommitIndex + 1)
// Signatures are sent in order (most recent first), calling `zip` will drop trailing sigs that are for deactivated/pruned commitments.
val active1 = active.zip(sigs).map { case (commitment, commit) =>
val active1 = active.zipWithIndex.map { case (commitment, idx) =>
// If the funding_txid isn't provided, we assume that signatures are sent in order (most recent first).
// This matches the behavior of peers who only support the experimental version of splicing.
val commit = sigs.find(_.fundingTxId_opt.contains(commitment.fundingTxId)).getOrElse(sigs(idx))
commitment.receiveCommit(params, channelKeys, commitKeys, changes, commit) match {
case Left(f) => return Left(f)
case Right(commitment1) => commitment1
Changes to another file:
@@ -876,7 +876,7 @@ class Channel(val nodeParams: NodeParams, val channelKeys: ChannelKeys, val wall
}

case Event(cmd: CMD_SPLICE, d: DATA_NORMAL) =>
if (!d.commitments.params.remoteParams.initFeatures.hasFeature(Features.SplicePrototype)) {
if (!d.commitments.params.remoteParams.initFeatures.hasFeature(Features.Splicing)) {
log.warning("cannot initiate splice, peer doesn't support splicing")
cmd.replyTo ! RES_FAILURE(cmd, CommandUnavailableInThisState(d.channelId, "splice", NORMAL))
stay()
@@ -2307,7 +2307,8 @@
}
case _ => Set.empty
}
val lastFundingLockedTlvs: Set[ChannelReestablishTlv] = if (d.commitments.params.remoteParams.initFeatures.hasFeature(Features.SplicePrototype)) {
val remoteFeatures = d.commitments.params.remoteParams.initFeatures
val lastFundingLockedTlvs: Set[ChannelReestablishTlv] = if (remoteFeatures.hasFeature(Features.Splicing) || remoteFeatures.unknown.contains(UnknownFeature(154)) || remoteFeatures.unknown.contains(UnknownFeature(155))) {
d.commitments.lastLocalLocked_opt.map(c => ChannelReestablishTlv.MyCurrentFundingLockedTlv(c.fundingTxId)).toSet ++
d.commitments.lastRemoteLocked_opt.map(c => ChannelReestablishTlv.YourLastFundingLockedTlv(c.fundingTxId)).toSet
} else Set.empty
@@ -2436,7 +2437,8 @@
// We only send channel_ready for initial funding transactions.
case Some(c) if c.fundingTxIndex != 0 => ()
case Some(c) =>
val remoteSpliceSupport = d.commitments.params.remoteParams.initFeatures.hasFeature(Features.SplicePrototype)
val remoteFeatures = d.commitments.params.remoteParams.initFeatures
val remoteSpliceSupport = remoteFeatures.hasFeature(Features.Splicing) || remoteFeatures.unknown.contains(UnknownFeature(154)) || remoteFeatures.unknown.contains(UnknownFeature(155))
// If our peer has not received our channel_ready, we retransmit it.
val notReceivedByRemote = remoteSpliceSupport && channelReestablish.yourLastFundingLocked_opt.isEmpty
// If next_local_commitment_number is 1 in both the channel_reestablish it sent and received, then the node
62 changes: 58 additions & 4 deletions eclair-core/src/main/scala/fr/acinq/eclair/io/PeerConnection.scala
@@ -28,7 +28,7 @@ import fr.acinq.eclair.remote.EclairInternalsSerializer.RemoteTypes
import fr.acinq.eclair.router.Router._
import fr.acinq.eclair.wire.protocol
import fr.acinq.eclair.wire.protocol._
import fr.acinq.eclair.{FSMDiagnosticActorLogging, Features, InitFeature, Logs, TimestampMilli, TimestampSecond}
import fr.acinq.eclair.{FSMDiagnosticActorLogging, Features, InitFeature, Logs, TimestampMilli, TimestampSecond, UnknownFeature}
import scodec.Attempt
import scodec.bits.ByteVector

@@ -206,9 +206,19 @@ class PeerConnection(keyPair: KeyPair, conf: PeerConnection.Conf, switchboard: A
stay()

case Event(msg: LightningMessage, d: ConnectedData) if sender() != d.transport => // if the message doesn't originate from the transport, it is an outgoing message
val useExperimentalSplice = d.remoteInit.features.unknown.contains(UnknownFeature(154)) || d.remoteInit.features.unknown.contains(UnknownFeature(155))
msg match {
case batch: CommitSigBatch => batch.messages.foreach(msg => d.transport forward msg)
case msg => d.transport forward msg
// If our peer is using the experimental splice version, we convert splice messages.
case msg: SpliceInit if useExperimentalSplice => d.transport forward ExperimentalSpliceInit.from(msg)
case msg: SpliceAck if useExperimentalSplice => d.transport forward ExperimentalSpliceAck.from(msg)
case msg: SpliceLocked if useExperimentalSplice => d.transport forward ExperimentalSpliceLocked.from(msg)
case msg: TxAddInput if useExperimentalSplice => d.transport forward msg.copy(tlvStream = TlvStream(msg.tlvStream.records.filterNot(_.isInstanceOf[TxAddInputTlv.SharedInputTxId])))
case msg: TxSignatures if useExperimentalSplice => d.transport forward msg.copy(tlvStream = TlvStream(msg.tlvStream.records.filterNot(_.isInstanceOf[TxSignaturesTlv.PreviousFundingTxSig])))
case batch: CommitSigBatch if useExperimentalSplice => batch.messages.foreach(msg => d.transport forward msg.copy(tlvStream = TlvStream(msg.tlvStream.records.filterNot(_.isInstanceOf[CommitSigTlv.FundingTx]))))
case batch: CommitSigBatch =>
d.transport forward StartBatch(batch.channelId, batch.batchSize)
batch.messages.foreach(msg => d.transport forward msg.copy(tlvStream = TlvStream(msg.tlvStream.records.filterNot(_.isInstanceOf[CommitSigTlv.ExperimentalBatchTlv]))))
case _ => d.transport forward msg
}
msg match {
// If we send any channel management message to this peer, the connection should be persistent.
@@ -348,8 +358,41 @@ class PeerConnection(keyPair: KeyPair, conf: PeerConnection.Conf, switchboard: A
// We immediately forward messages to the peer, unless they are part of a batch, in which case we wait to
// receive the whole batch before forwarding.
msg match {
case msg: StartBatch =>
log.debug("starting batch of size {} for channel_id={}", msg.batchSize, msg.channelId)
d.commitSigBatch_opt match {
case Some(pending) if pending.received.nonEmpty =>
log.warning("starting batch with incomplete previous batch ({}/{} received)", pending.received.size, pending.batchSize)
// This is a spec violation from our peer: this will likely lead to a force-close.
d.transport ! Warning(msg.channelId, "invalid start_batch message: the previous batch is not done yet")
d.peer ! CommitSigBatch(pending.received)
case _ => ()
}
stay() using d.copy(commitSigBatch_opt = Some(PendingCommitSigBatch(msg.channelId, msg.batchSize, Nil)))
case msg: HasChannelId if d.commitSigBatch_opt.nonEmpty =>
// We only support batches of commit_sig messages: other messages will simply be relayed individually.
val pending = d.commitSigBatch_opt.get
msg match {
case msg: CommitSig if msg.channelId == pending.channelId =>
val received1 = pending.received :+ msg
if (received1.size == pending.batchSize) {
log.debug("received last commit_sig in batch for channel_id={}", msg.channelId)
d.peer ! CommitSigBatch(received1)
stay() using d.copy(commitSigBatch_opt = None)
} else {
log.debug("received commit_sig {}/{} in batch for channel_id={}", received1.size, pending.batchSize, msg.channelId)
stay() using d.copy(commitSigBatch_opt = Some(pending.copy(received = received1)))
}
case _ =>
log.warning("received {} as part of a batch: we don't support batching that kind of messages", msg.getClass.getSimpleName)
if (pending.received.nonEmpty) d.peer ! CommitSigBatch(pending.received)
d.peer ! msg
stay() using d.copy(commitSigBatch_opt = None)
}
case msg: CommitSig =>
msg.tlvStream.get[CommitSigTlv.BatchTlv].map(_.size) match {
// We keep supporting the experimental version of splicing that older Phoenix wallets use.
// Once we're confident that enough Phoenix users have upgraded, we should remove this branch.
msg.tlvStream.get[CommitSigTlv.ExperimentalBatchTlv].map(_.size) match {
case Some(batchSize) if batchSize > 25 =>
log.warning("received legacy batch of commit_sig exceeding our threshold ({} > 25), processing messages individually", batchSize)
// We don't want peers to be able to exhaust our memory by sending batches of dummy messages that we keep in RAM.
@@ -381,6 +424,16 @@ class PeerConnection(keyPair: KeyPair, conf: PeerConnection.Conf, switchboard: A
d.peer ! msg
stay()
}
// If our peer is using the experimental splice version, we convert splice messages.
case msg: ExperimentalSpliceInit =>
d.peer ! msg.toSpliceInit()
stay()
case msg: ExperimentalSpliceAck =>
d.peer ! msg.toSpliceAck()
stay()
case msg: ExperimentalSpliceLocked =>
d.peer ! msg.toSpliceLocked()
stay()
case _ =>
d.peer ! msg
stay()
@@ -613,6 +666,7 @@ object PeerConnection {
gossipTimestampFilter: Option[GossipTimestampFilter] = None,
behavior: Behavior = Behavior(),
expectedPong_opt: Option[ExpectedPong] = None,
commitSigBatch_opt: Option[PendingCommitSigBatch] = None,
legacyCommitSigBatch_opt: Option[PendingCommitSigBatch] = None,
isPersistent: Boolean) extends Data with HasTransport

Changes to another file:
@@ -301,3 +301,9 @@ object ClosingTlv {
)

}

sealed trait StartBatchTlv extends Tlv

object StartBatchTlv {
val startBatchTlvCodec: Codec[TlvStream[StartBatchTlv]] = tlvStream(discriminated[StartBatchTlv].by(varint))
}
Changes to another file:
@@ -17,6 +17,7 @@
package fr.acinq.eclair.wire.protocol

import fr.acinq.bitcoin.scalacompat.Crypto.PublicKey
import fr.acinq.bitcoin.scalacompat.TxId
import fr.acinq.eclair.UInt64
import fr.acinq.eclair.crypto.Sphinx
import fr.acinq.eclair.wire.protocol.CommonCodecs._
@@ -80,16 +81,27 @@ object UpdateFailMalformedHtlcTlv {
sealed trait CommitSigTlv extends Tlv

object CommitSigTlv {
/**
* While a splice is ongoing and not locked, we have multiple valid commitments.
* We send one [[CommitSig]] message for each valid commitment.
*
* @param txId the funding transaction spent by this commitment.
*/
case class FundingTx(txId: TxId) extends CommitSigTlv

/** @param size the number of [[CommitSig]] messages in the batch */
case class BatchTlv(size: Int) extends CommitSigTlv
private val fundingTxTlv: Codec[FundingTx] = tlvField(txIdAsHash)

object BatchTlv {
val codec: Codec[BatchTlv] = tlvField(tu16)
}
/**
* The experimental version of splicing included the number of [[CommitSig]] messages in the batch.
* This TLV can be removed once Phoenix users have upgraded to the official version of splicing.
*/
case class ExperimentalBatchTlv(size: Int) extends CommitSigTlv

private val experimentalBatchTlv: Codec[ExperimentalBatchTlv] = tlvField(tu16)

val commitSigTlvCodec: Codec[TlvStream[CommitSigTlv]] = tlvStream(discriminated[CommitSigTlv].by(varint)
.typecase(UInt64(0x47010005), BatchTlv.codec)
.typecase(UInt64(0), fundingTxTlv)
.typecase(UInt64(0x47010005), experimentalBatchTlv)
)

}
Changes to another file:
@@ -33,9 +33,13 @@ object TxAddInputTlv {
/** When doing a splice, the initiator must provide the previous funding txId instead of the whole transaction. */
case class SharedInputTxId(txId: TxId) extends TxAddInputTlv

/** Same as [[SharedInputTxId]] for peers who only support the experimental version of splicing. */
case class ExperimentalSharedInputTxId(txId: TxId) extends TxAddInputTlv

val txAddInputTlvCodec: Codec[TlvStream[TxAddInputTlv]] = tlvStream(discriminated[TxAddInputTlv].by(varint)
// Note that we actually encode as a tx_hash to be consistent with other lightning messages.
.typecase(UInt64(1105), tlvField(txIdAsHash.as[SharedInputTxId]))
.typecase(UInt64(0), tlvField(txIdAsHash.as[SharedInputTxId]))
.typecase(UInt64(1105), tlvField(txIdAsHash.as[ExperimentalSharedInputTxId]))
)
}

@@ -69,8 +73,12 @@ object TxSignaturesTlv {
/** When doing a splice, each peer must provide their signature for the previous 2-of-2 funding output. */
case class PreviousFundingTxSig(sig: ByteVector64) extends TxSignaturesTlv

/** Same as [[PreviousFundingTxSig]] for peers who only support the experimental version of splicing. */
case class ExperimentalPreviousFundingTxSig(sig: ByteVector64) extends TxSignaturesTlv

val txSignaturesTlvCodec: Codec[TlvStream[TxSignaturesTlv]] = tlvStream(discriminated[TxSignaturesTlv].by(varint)
.typecase(UInt64(601), tlvField(bytes64.as[PreviousFundingTxSig]))
.typecase(UInt64(0), tlvField(bytes64.as[PreviousFundingTxSig]))
.typecase(UInt64(601), tlvField(bytes64.as[ExperimentalPreviousFundingTxSig]))
)
}
