-
After the 1.0 release, several versions of CCF will have to be compatible with one another. This can be because, for example, nodes built from different versions run in the same service during a code upgrade, or a new service recovers a ledger written by an older version. I will write a new comment here to discuss each aspect of compatibility across different versions (in no particular order).
-
Enclave-host compatibility

As per #1307, we will opt for the strict solution: the host and enclave need to be from the same release. The only drawback I can think of is producing a hot-fix for the host only, without requiring the operators/members to re-build their applications, vote for a new code ID, etc. Perhaps we could add a flag on the host to skip this check for these kinds of situations?
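To make the suggestion concrete, here is a minimal sketch of what the strict check with an operator-facing escape hatch could look like. The function name and the skip flag are hypothetical, for illustration only, not CCF's actual API:

#include <stdexcept>
#include <string>

// Hypothetical strict version check between host and enclave, with an
// explicit opt-out for host-only hot-fixes (e.g. set by a hypothetical
// --skip-version-check host flag)
void check_enclave_host_compatibility(
  const std::string& host_version,
  const std::string& enclave_version,
  bool skip_version_check)
{
  if (skip_version_check)
  {
    // Operators explicitly accept the risk of running mismatched versions,
    // e.g. after a host-only hot-fix that does not change the enclave
    return;
  }

  if (host_version != enclave_version)
  {
    throw std::logic_error(
      "Host version " + host_version +
      " does not match enclave version " + enclave_version);
  }
}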
-
Ledger format

At first, I thought we could stick a version number in each ledger chunk, but 1) it wouldn't be integrity protected and 2) nodes from different versions may write to the same ledger chunk (during a code update). It couldn't live in a signature transaction either, as signatures sit at the end of the transaction block they cover. I suggest the sensible option is to stick a version number at the start of each ledger frame (format described here) to indicate how to deserialise the entry (backwards compatibility). If a new node replicates new ledger entries to an old node (i.e. the old node doesn't know the format of the new ledger entries), I suggest the old node should respond with an error. In other words, we don't plan for forward compatibility, as these scenarios are supposed to be transient (i.e. during a code update).
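To illustrate, here is a minimal sketch of what a versioned frame could look like. The layout (a leading version byte before a size-prefixed entry) is an assumption for illustration, not the actual CCF ledger frame format:

#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <vector>

static constexpr uint8_t current_ledger_format_version = 1;

// Prefix each frame with the format version it was written with
std::vector<uint8_t> write_frame(const std::vector<uint8_t>& entry)
{
  std::vector<uint8_t> frame;
  frame.push_back(current_ledger_format_version);
  const uint32_t size = static_cast<uint32_t>(entry.size());
  const auto* size_bytes = reinterpret_cast<const uint8_t*>(&size);
  frame.insert(frame.end(), size_bytes, size_bytes + sizeof(size));
  frame.insert(frame.end(), entry.begin(), entry.end());
  return frame;
}

// An old node rejects frames written in a future format rather than
// mis-parsing them: no forward compatibility, as these scenarios are
// transient (code update)
std::vector<uint8_t> read_frame(const std::vector<uint8_t>& frame)
{
  if (frame.at(0) > current_ledger_format_version)
  {
    throw std::logic_error("Unknown ledger entry format: refusing to apply");
  }
  uint32_t size = 0;
  std::memcpy(&size, frame.data() + 1, sizeof(size));
  return {
    frame.begin() + 1 + sizeof(size),
    frame.begin() + 1 + sizeof(size) + size};
}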
-
Built-in tables schema

I'm assuming that all built-in tables, except a few, are serialised as JSON (#1993). For backwards compatibility, only new fields should be added to the value type, as optional fields (with a sensible default value), so that historical ledger entries can still be deserialised successfully. There may be a round-trip data-loss issue if we have two nodes (one old, one new) in the same service, as the old node could drop updates to that key (see snippet below).

Snippet:

struct CustomClass
{
  std::string s;
  size_t n;
};
DECLARE_JSON_TYPE(CustomClass);
DECLARE_JSON_REQUIRED_FIELDS(CustomClass, s, n);

struct CustomClassV2
{
  std::string s;
  size_t n;
  bool b = false; // New field: needs to be added as optional
};
DECLARE_JSON_TYPE_WITH_OPTIONAL_FIELDS(CustomClassV2);
DECLARE_JSON_REQUIRED_FIELDS(CustomClassV2, s, n);
DECLARE_JSON_OPTIONAL_FIELDS(CustomClassV2, b);

...

auto consensus_v1 = std::make_shared<kv::StubConsensus>();
auto consensus_v2 = std::make_shared<kv::StubConsensus>();
kv::Store kv_store_v1(consensus_v1);
kv::Store kv_store_v2(consensus_v2);

using DefaultSerialisedMapV1 = kv::JsonSerialisedMap<size_t, CustomClass>;
DefaultSerialisedMapV1 map("public:map");
using DefaultSerialisedMapV2 = kv::JsonSerialisedMap<size_t, CustomClassV2>;
DefaultSerialisedMapV2 mapv2("public:map"); // Note: both maps have the same name

// Issue transaction on old node
{
  auto tx = kv_store_v1.create_tx();
  auto handle = tx.rw(map);
  handle->put(0, {"value", 1});
  tx.commit();
}

// Deserialise transaction on new node and update value
{
  REQUIRE(
    kv_store_v2
      .apply(consensus_v1->get_latest_data().value(), ConsensusType::CFT)
      ->execute() == kv::ApplySuccess::PASS);
  auto tx = kv_store_v2.create_tx();
  auto handle = tx.rw(mapv2);
  // First issue: if the key type has changed and doesn't hash to the same
  // key, this lookup fails!
  auto value = handle->get(0);
  REQUIRE(value.has_value());
  REQUIRE(value->s == "value");
  REQUIRE(value->n == 1);
  // If a field is added to the value, the deserialisation will succeed.
  // Note that the added field should have a default value to avoid ambiguity
  REQUIRE(value->b == false);
  value->b = true;
  handle->put(0, value.value());
  tx.commit();
}

// Deserialise updated value on old node, and update again
{
  REQUIRE(
    kv_store_v1
      .apply(consensus_v2->get_latest_data().value(), ConsensusType::CFT)
      ->execute() == kv::ApplySuccess::PASS);
  auto tx = kv_store_v1.create_tx();
  auto handle = tx.rw(map);
  auto value = handle->get(0);
  REQUIRE(value.has_value());
  REQUIRE(value->s == "value");
  REQUIRE(value->n == 1);
  // REQUIRE(value->b == true); // Doesn't compile: CustomClass has no b
  value->n++;
  handle->put(0, value.value());
  tx.commit();
}

// Finally, deserialise updated value on new node: previous update has
// disappeared
{
  REQUIRE(
    kv_store_v2
      .apply(consensus_v1->get_latest_data().value(), ConsensusType::CFT)
      ->execute() == kv::ApplySuccess::PASS);
  auto tx = kv_store_v2.create_tx();
  auto handle = tx.ro(mapv2);
  auto value = handle->get(0);
  REQUIRE(value.has_value());
  REQUIRE(value->s == "value");
  REQUIRE(value->n == 2);
  // REQUIRE(value->b == true); // Fails - the value of b was lost during
  // the round trip
}
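To see exactly where b gets dropped, it helps to look at the serialised JSON itself. CCF's JSON serialisation is built on nlohmann::json, so the round trip through the old node boils down to something like this standalone sketch (not actual CCF code):

#include <iostream>
#include <nlohmann/json.hpp>

int main()
{
  // What the new node writes for key 0 once b has been set
  nlohmann::json v2_value = {{"s", "value"}, {"n", 1}, {"b", true}};

  // The old node only knows about s and n, so these are the only fields
  // it populates on deserialisation
  std::string s = v2_value["s"];
  size_t n = v2_value["n"];

  // When the old node updates and re-serialises the value, b is gone
  nlohmann::json v1_value = {{"s", s}, {"n", n + 1}};
  std::cout << v1_value.dump() << std::endl; // {"n":2,"s":"value"}

  return 0;
}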
The same applies to new fields added to the key type, so that a serialised old key can still be retrieved by the new node. However, when a new field is added and the key is updated, we end up with the key duplicated in the store (see snippet below). This may be fine for now, as most of CCF's built-in tables are keyed by simple types (e.g. ...).

Snippet:

// Same pattern as the previous example, but with the maps keyed by the
// custom types instead:
// kv::JsonSerialisedMap<CustomClass, size_t> map("public:map");
// kv::JsonSerialisedMap<CustomClassV2, size_t> mapv2("public:map");
// Issue transaction on old node
{
  auto tx = kv_store_v1.create_tx();
  auto handle = tx.rw(map);
  CustomClass key = {"key", 1};
  handle->put(key, 0);
  tx.commit();
}

// Deserialise transaction on new node and update the key
{
  REQUIRE(
    kv_store_v2
      .apply(consensus_v1->get_latest_data().value(), ConsensusType::CFT)
      ->execute() == kv::ApplySuccess::PASS);
  auto tx = kv_store_v2.create_tx();
  auto handle = tx.rw(mapv2);
  CustomClassV2 key = {"key", 1};
  auto value = handle->get(key);
  REQUIRE(value.has_value());
  // Update key
  key.b = true;
  handle->put(key, 1);
  tx.commit();
}

// We now have two keys!
{
  auto tx = kv_store_v2.create_tx();
  auto handle = tx.rw(mapv2);
  CustomClassV2 new_key = {"key", 1, true}; // Key written by the new node
  REQUIRE(handle->get(new_key) == 1);
  CustomClassV2 old_key = {"key", 1}; // Key originally written by the old node
  REQUIRE(handle->has(old_key));
}
-
How to test for backward compatibility?

We could have a CI job that pulls the latest LTS release(s), runs a service for a little while, performs a code upgrade to nodes built from the latest commit, then recovers the ledger in a brand new service, issues historical queries, etc.
-
After some discussion, follow-up issues have been raised. See https://github.com/microsoft/CCF/labels/versioning