\ No newline at end of file
diff --git a/developers/api-reference/app-registrations-api/index.html b/developers/api-reference/app-registrations-api/index.html
index 7b9251048..692085781 100644
--- a/developers/api-reference/app-registrations-api/index.html
+++ b/developers/api-reference/app-registrations-api/index.html
@@ -1,4 +1,4 @@
-App Registrations API - DataTrails
+App Registrations API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -137,7 +137,7 @@
],
"next_page_token": "eyJvcmlnX3JlcSI6eyJwYWdlX3NpemUiOjJ9LCJza2lwIjoyfQ=="
}
| Response Parameter | Type | Description |
|---|---|---|
| applications | array | Describes a single application used for machine authentication |
| next_page_token | string | Pagination token. Empty on the first request; on subsequent requests, copied from the previous response. |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 401 | Returned when the user is not authenticated to the system. |
| 403 | Returned when the user is not authorized. |
| 429 | Returned when a user exceeds their subscription’s rate limit for requests. |
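To fetch the next page of results, pass the returned next_page_token back on the following request. A minimal sketch, assuming the iam/v1/applications path and the page_token query parameter from the DataTrails API conventions; the token value is taken from the example response above:

```bash
# Hedged sketch: request the next page of applications by passing the
# next_page_token from the previous response as page_token.
curl -g -X GET \
    -H "@$HOME/.datatrails/bearer-token.txt" \
    "https://app.datatrails.ai/archivist/iam/v1/applications?page_size=2&page_token=eyJvcmlnX3JlcSI6eyJwYWdlX3NpemUiOjJ9LCJza2lwIjoyfQ=="
```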
Description: Registers a new application, generating a client ID and secret for use in machine authentication. The response will include the client secret, but it will not be possible to retrieve it afterwards.
| Parameter | Type | Description |
|---|---|---|
| custom_claims | object | Custom claims to add to Application for use in access policies. |
| display_name | string | Human-readable display name for this Application. |
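A hedged sketch of registering an application with these parameters; the iam/v1/applications path and the bearer-token file convention are assumed from other DataTrails examples, and the attribute values are illustrative:

```bash
# Hedged sketch: create an application; display_name and custom_claims are
# the request parameters described above, with illustrative values.
curl -X POST \
    -H "@$HOME/.datatrails/bearer-token.txt" \
    -H "Content-Type: application/json" \
    -d '{
          "display_name": "TrafficLight101",
          "custom_claims": {"serial_number": "TL1000000101"}
        }' \
    https://app.datatrails.ai/archivist/iam/v1/applications
```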
| Parameter | Type | Description |
|---|---|---|
| custom_claims | object | Custom claims to add to Application for use in access policies. |
Description: Regenerates the client secret for the application matching the supplied UUID. The response will include the client secret, but it will not be possible to retrieve it afterwards.
| Response Parameter | Type | Description |
|---|---|---|
| client_id | string | Client ID for use in OIDC client credentials flow |
| credentials | array | Describes a single time-limited secret |
| custom_claims | object | Custom claims to add to Application for use in access policies. |
| display_name | string | Human-readable display name for this Application. |
| identity | string | Resource name for the application |
| roles | array | |
| tenant_id | string | Identity of the tenant owning this application |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 401 | Returned when the user is not authenticated to the system. |
| 403 | Returned when the user is not authorized. |
| 404 | Returned when the Application does not exist. |
| 429 | Returned when a user exceeds their subscription’s rate limit for requests. |
\ No newline at end of file
diff --git a/developers/api-reference/assets-api/index.html b/developers/api-reference/assets-api/index.html
index 0fb375d71..371801cd8 100644
--- a/developers/api-reference/assets-api/index.html
+++ b/developers/api-reference/assets-api/index.html
@@ -1,4 +1,4 @@
-Assets API - DataTrails
+Assets API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -513,4 +513,4 @@
}
| Response Parameter | Type | Description |
|---|---|---|
| asset_attributes | object | key-value mapping of asset attributes |
| asset_identity | string | identity of a related asset resource, e.g. assets/11bf5b37-e0b8-42e0-8dcf-dc8c4aefc000 |
| behaviour | string | The behaviour used to create the event, e.g. RecordEvidence |
| block_number | string | number of the block the event was committed on |
| confirmation_status | string | indicates if the event has been successfully committed to the blockchain |
| event_attributes | object | key-value mapping of event attributes |
| from | string | wallet address for the creator of this event |
| identity | string | identity of an event resource |
| merklelog_entry | object | verifiable merkle MMR log entry details |
| operation | string | The operation represented by the event, e.g. Record |
| principal_accepted | object | principal recorded by the server |
| principal_declared | object | principal provided by the user |
| tenant_identity | string | identity of the tenant that created this event |
| timestamp_accepted | string | time of event as recorded by the server |
| timestamp_committed | string | time of event as recorded in verifiable storage |
| timestamp_declared | string | time of event as declared by the user |
| transaction_id | string | hash of the transaction as a hex string, e.g. 0x11bf5b37e0b842e08dcfdc8c4aefc000 |
| transaction_index | string | index of the event within the committed block |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 401 | Returned when the user is not authenticated to the system. |
| 402 | Returned when the user’s quota of Events has been reached. |
| 429 | Returned when a user exceeds their subscription’s rate limit for requests. |
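As a concrete illustration of these response fields, a minimal sketch that fetches one event and extracts its confirmation details; the asset UUID comes from the example above and the event UUID is a placeholder:

```bash
# Hedged sketch: read a single event and inspect the fields that record
# when and how it was committed to verifiable storage.
curl -H "@$HOME/.datatrails/bearer-token.txt" \
    "https://app.datatrails.ai/archivist/v2/assets/11bf5b37-e0b8-42e0-8dcf-dc8c4aefc000/events/<event-uuid>" \
    | jq '{confirmation_status, timestamp_committed, transaction_id}'
```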
\ No newline at end of file
diff --git a/developers/api-reference/attachments-api/index.html b/developers/api-reference/attachments-api/index.html
index 11f38a117..808cc7ebc 100644
--- a/developers/api-reference/attachments-api/index.html
+++ b/developers/api-reference/attachments-api/index.html
@@ -1,4 +1,4 @@
-Attachments API - DataTrails
+Attachments API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -103,4 +103,4 @@
"subject": "user-xxxx@example.com",
"tenantid": "tenant/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"timestamp_accepted": "2019-11-07T15:31:49Z"
-}
| Response Parameter | Type | Description |
|---|---|---|
| hash | | blob hash. |
| identity | string | blob identity. |
| issuer | string | principal issuer. |
| mime_type | string | HTTP MIME type. |
| scanned_bad_reason | string | if scanned as SCANNED_BAD, contains a hint of the scan result. |
| scanned_status | string | status of the scan. |
| scanned_timestamp | string | date and time when the attachment was scanned. |
| size | integer | size of the blob. |
| subject | string | principal subject. |
| tenantid | string | identity of the tenant the blob belongs to. |
| timestamp_accepted | string | date and time when the request was received. |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 400 | Returned when the request is badly formed. |
| 404 | Returned when the underlying system can’t find the asset. |
\ No newline at end of file
diff --git a/developers/api-reference/blobs-api/index.html b/developers/api-reference/blobs-api/index.html
index b9639d25e..2f35a1b96 100644
--- a/developers/api-reference/blobs-api/index.html
+++ b/developers/api-reference/blobs-api/index.html
@@ -1,4 +1,4 @@
-Blobs API - DataTrails
+Blobs API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -87,4 +87,4 @@
"subject": "user-xxxx@example.com",
"tenantid": "tenant/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"timestamp_accepted": "2019-11-07T15:31:49Z"
-}
| Response Parameter | Type | Description |
|---|---|---|
| hash | | blob hash. |
| identity | string | blob identity. |
| issuer | string | principal issuer. |
| mime_type | string | HTTP MIME type. |
| scanned_bad_reason | string | if scanned as SCANNED_BAD, contains a hint of the scan result. |
| scanned_status | string | status of the scan. |
| scanned_timestamp | string | date and time when the attachment was scanned. |
| size | integer | size of the blob. |
| subject | string | principal subject. |
| tenantid | string | identity of the tenant the blob belongs to. |
| timestamp_accepted | string | date and time when the request was received. |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 400 | Returned when the request is badly formed. |
| 401 | Returned when the user is not authenticated to the system. |
| 403 | Returned when the user is not authorized to get the blob metadata. |
| 429 | Returned when a user exceeds their subscription’s rate limit for requests. |
| 500 | Returned when the underlying system returns an error. |
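The scan fields above can gate downstream use of a blob. A minimal sketch, assuming the v1/blobs path from the Blobs API examples; the blob UUID is a placeholder:

```bash
# Hedged sketch: fetch blob metadata and check the malware-scan verdict
# before trusting the content.
curl -H "@$HOME/.datatrails/bearer-token.txt" \
    "https://app.datatrails.ai/archivist/v1/blobs/<blob-uuid>" \
    | jq '{scanned_status, scanned_bad_reason}'
```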
\ No newline at end of file
diff --git a/developers/api-reference/caps-api/index.html b/developers/api-reference/caps-api/index.html
index 6cc44a94c..f746be300 100644
--- a/developers/api-reference/caps-api/index.html
+++ b/developers/api-reference/caps-api/index.html
@@ -1,4 +1,4 @@
-Caps API - DataTrails
+Caps API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -24,4 +24,4 @@
}]}
-
\ No newline at end of file
diff --git a/developers/api-reference/compliance-api/index.html b/developers/api-reference/compliance-api/index.html
index 44c2c87b0..87026cae2 100644
--- a/developers/api-reference/compliance-api/index.html
+++ b/developers/api-reference/compliance-api/index.html
@@ -1,4 +1,4 @@
-Compliance API - DataTrails
+Compliance API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -193,4 +193,4 @@
"event_display_type": "Maintenance Performed",
"identity": "compliance_policies/463fab3a-bae5-4349-8f76-f6454da20c9d",
"time_period_seconds": 86800
-}
| Response Parameter | Type | Description |
|---|---|---|
| asset_filter | array | Filter |
| closing_event_display_type | string | |
| compliance_type | | |
| description | string | |
| display_name | string | |
| dynamic_variability | number | |
| dynamic_window | string | |
| event_display_type | string | |
| identity | string | |
| richness_assertions | array | Filter |
| time_period_seconds | string | |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 401 | Returned when the user is not authenticated to the system. |
| 403 | Returned when the user is not authorized to access the requested resource. |
| 404 | Returned when the asset with the id does not exist. |
| 429 | Returned when a user exceeds their subscription’s rate limit for requests. |
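A hedged sketch of reading back a policy like the one above; the v1 path prefix is an assumption, while the policy identity is taken from this page's example response:

```bash
# Hedged sketch: fetch a compliance policy by the identity shown in the
# example response above.
curl -H "@$HOME/.datatrails/bearer-token.txt" \
    "https://app.datatrails.ai/archivist/v1/compliance_policies/463fab3a-bae5-4349-8f76-f6454da20c9d" | jq
```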
\ No newline at end of file
diff --git a/developers/api-reference/events-api/index.html b/developers/api-reference/events-api/index.html
index 8600fe2c4..e11816e17 100644
--- a/developers/api-reference/events-api/index.html
+++ b/developers/api-reference/events-api/index.html
@@ -1,4 +1,4 @@
-Events API - DataTrails
+Events API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -609,4 +609,4 @@
}
| Response Parameter | Type | Description |
|---|---|---|
| asset_attributes | object | key-value mapping of asset attributes |
| asset_identity | string | identity of a related asset resource, e.g. assets/11bf5b37-e0b8-42e0-8dcf-dc8c4aefc000 |
| behaviour | string | The behaviour used to create the event, e.g. RecordEvidence |
| block_number | string | number of the block the event was committed on |
| confirmation_status | string | indicates if the event has been successfully committed to the blockchain |
| event_attributes | object | key-value mapping of event attributes |
| from | string | wallet address for the creator of this event |
| identity | string | identity of an event resource |
| merklelog_entry | object | verifiable merkle MMR log entry details |
| operation | string | The operation represented by the event, e.g. Record |
| principal_accepted | object | principal recorded by the server |
| principal_declared | object | principal provided by the user |
| tenant_identity | string | identity of the tenant that created this event |
| timestamp_accepted | string | time of event as recorded by the server |
| timestamp_committed | string | time of event as recorded in verifiable storage |
| timestamp_declared | string | time of event as declared by the user |
| transaction_id | string | hash of the transaction as a hex string, e.g. 0x11bf5b37e0b842e08dcfdc8c4aefc000 |
| transaction_index | string | index of the event within the committed block |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 401 | Returned when the user is not authenticated to the system. |
| 402 | Returned when the user’s quota of Events has been reached. |
| 429 | Returned when a user exceeds their subscription’s rate limit for requests. |
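Event listings can be filtered on these attributes. A hedged sketch using the wildcard assets/-/events form (the same pattern as the publicassets/-/events example elsewhere in these docs); the attribute value is illustrative:

```bash
# Hedged sketch: list events across all assets whose arc_display_type
# event attribute is "Release".
curl -g -H "@$HOME/.datatrails/bearer-token.txt" \
    "https://app.datatrails.ai/archivist/v2/assets/-/events?event_attributes.arc_display_type=Release" | jq
```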
\ No newline at end of file
diff --git a/developers/api-reference/iam-policies-api/index.html b/developers/api-reference/iam-policies-api/index.html
index 83f0ab491..97db83b82 100644
--- a/developers/api-reference/iam-policies-api/index.html
+++ b/developers/api-reference/iam-policies-api/index.html
@@ -1,4 +1,4 @@
-IAM Policies API - DataTrails
+IAM Policies API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -636,4 +636,4 @@
}
],
"page_token": "eyJhbGciOiJSUzI1NiIsImtpZCI6InN0dW50aWR"
-}
| Response Parameter | Type | Description |
|---|---|---|
| access_policies | array | Describes an Access Policy for OBAC |
| next_page_token | string | Token to retrieve the next page of results or empty if there are none. |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 400 | Returned when the request is badly formed. |
| 401 | Returned when the user is not authenticated to the system. |
| 403 | Returned when the user is not authorized to list the access policy. |
| 404 | Returned when the identified access policy does not exist. |
| 429 | Returned when a user exceeds their subscription’s rate limit for requests. |
| 500 | Returned when the underlying storage system returns an error. |
\ No newline at end of file
diff --git a/developers/api-reference/iam-subjects-api/index.html b/developers/api-reference/iam-subjects-api/index.html
index 0e700d0b2..2ebe8dd62 100644
--- a/developers/api-reference/iam-subjects-api/index.html
+++ b/developers/api-reference/iam-subjects-api/index.html
@@ -1,4 +1,4 @@
-IAM Subjects API - DataTrails
+IAM Subjects API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -160,4 +160,4 @@
"wallet_pub_key": [
"key1"
]
-}
| Response Parameter | Type | Description |
|---|---|---|
| confirmation_status | | |
| display_name | string | Customer-friendly name for the subject. |
| identity | string | Unique identification for the subject, Relative Resource Name |
| tenant | string | Tenant ID |
| tessera_pub_key | array | Organisation’s Tessera wallet keys (BNF) |
| wallet_address | array | Organisation’s wallet addresses |
| wallet_pub_key | array | Organisation’s public wallet keys (BNF) |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 400 | Returned when the request is badly formed. |
| 401 | Returned when the user is not authenticated to the system. |
| 403 | Returned when the user is not authorized to update the subject. |
| 404 | Returned when the identified subject does not exist. |
| 429 | Returned when a user exceeds their subscription’s rate limit for requests. |
| 500 | Returned when the underlying storage system returns an error. |
\ No newline at end of file
diff --git a/developers/api-reference/locations-api/index.html b/developers/api-reference/locations-api/index.html
index 6168c001c..a16cdc9c5 100644
--- a/developers/api-reference/locations-api/index.html
+++ b/developers/api-reference/locations-api/index.html
@@ -1,4 +1,4 @@
-Locations API - DataTrails
+Locations API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -180,4 +180,4 @@
"orgb"
]
}
-}
| Response Parameter | Type | Description |
|---|---|---|
| location_identity | string | The location identity in the form: locations/{uuid} |
| permissions | | Subject identities this location is shared with |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 401 | Returned when the user is not authenticated to the system. |
| 403 | Returned when the user is not authorized to access permissions for the location. |
| 404 | Returned when the identified location does not exist. |
| 429 | Returned when a user exceeds their subscription’s rate limit for requests. |
\ No newline at end of file
diff --git a/developers/api-reference/public-assets-api/index.html b/developers/api-reference/public-assets-api/index.html
index 333129c00..e2d896788 100644
--- a/developers/api-reference/public-assets-api/index.html
+++ b/developers/api-reference/public-assets-api/index.html
@@ -1,4 +1,4 @@
-Public Assets API - DataTrails
+Public Assets API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -274,4 +274,4 @@
}
],
"next_page_token": "abcd"
-}
| Response Parameter | Type | Description |
|---|---|---|
| events | array | This describes an Event. |
| next_page_token | string | Token to retrieve the next page of results or empty if there are none. |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 206 | The number of events exceeds the server’s limit. The approximate number of matching results is provided by the x-total-count header; the exact limit is available in the content-range header. The value format is ‘items 0-LIMIT/TOTAL’. Note that x-total-count is always present for 200 and 206 responses. It is the server’s best available approximation. Similarly, in any result set, you may get a few more than LIMIT items. |
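To observe the x-total-count and content-range headers described for the 206 response, a minimal sketch; the asset UUID is a placeholder:

```bash
# Hedged sketch: dump only the response headers of a public events listing
# to see the approximate total and the exact server limit.
curl -sD - -o /dev/null \
    "https://app.datatrails.ai/archivist/v2/publicassets/<asset-uuid>/events" \
    | grep -i -e "x-total-count" -e "content-range"
```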
\ No newline at end of file
diff --git a/developers/api-reference/tenancies-api/index.html b/developers/api-reference/tenancies-api/index.html
index 8c94fd0c0..ba2b9fe38 100644
--- a/developers/api-reference/tenancies-api/index.html
+++ b/developers/api-reference/tenancies-api/index.html
@@ -1,4 +1,4 @@
-Tenancies API - DataTrails
+Tenancies API - DataTrails
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
@@ -130,4 +130,4 @@
"identity": "tenant/12149552-f258-430d-922b-4bcd8413ee30"
}
]
-}
| Response Parameter | Type | Description |
|---|---|---|
| next_page_token | string | Token to retrieve the next page of results or empty if there are none. |
| tenants | array | Tenant information for a user. |
| Responses | Description |
|---|---|
| 200 | A successful response. |
| 400 | Returned when the request is badly formed. |
| 401 | Returned when the user is not authenticated to the system. |
| 403 | Returned when the user is not authorized to read the user. |
| 404 | Returned when the identified user does not exist. |
| 500 | Returned when the underlying storage system returns an error. |
\ No newline at end of file
diff --git a/developers/developer-patterns/3rdparty-verification/index.html b/developers/developer-patterns/3rdparty-verification/index.html
new file mode 100644
index 000000000..46fd51c46
--- /dev/null
+++ b/developers/developer-patterns/3rdparty-verification/index.html
@@ -0,0 +1,113 @@
+Verified Replication of the Datatrails Transparency Logs - DataTrails
+
Without the measures described in this article, it is still extremely challenging to compromise a transparency solution based on DataTrails.
To do so, the systems of more than just DataTrails need to be compromised in very specific ways.
To illustrate this, consider this typical flow for how Data can be used in a transparent and tamper-evident way with DataTrails.
Replicated Transparency Logs
This is already a very robust process. For this process to fail, the following steps must all be accomplished:
The source of the Data, which may not be the Owner, must be compromised to substitute the malicious Data.
Owner authentication of the Data, such as adding a signed digest in the metadata, must be compromised.
The DataTrails SaaS database must be compromised.
The DataTrails ledger must be compromised and re-built and re-signed.
Executing such an attack successfully would require significant effort and infiltration of both the Data source and DataTrails.
Nonetheless, for use cases where even this small degree of trust in DataTrails is unacceptable, the recipes in this article ensure the following guarantees are fully independent of DataTrails:
The guarantee of non-falsifiability: Event data cannot be falsified.
The guarantee of non-repudiation: Event data cannot be removed from the record (i.e. ‘shredded’ or deleted).
The guarantee of provability: Event data held here and now can be proven to be identical to the data created there and then (creating these proofs does not require the original event data).
The guarantee of demonstrable completeness: Series of events (trails) can be proven to be complete, with no gaps or omissions.
These guarantees are “fail safe” against regular data corruption of the log data.
In the event of individual log entry corruption, verification checks would fail for that entry.
All modifications to the ledger which result in provable changes can be detected without a fully auditable replica.
By maintaining a fully auditable replica, continued verifiable operation is possible even if DataTrails is prevented from operating.
To provide this capability, checking that all metadata is exactly as originally recorded, a copy of the metadata must also be replicated.
In cases where this capability is required, data retention remains manageable and has completely predictable storage requirements.
The log format makes it operationally very simple to discard data that ceases to be interesting.
The metadata is returned to the Owner when the event is recorded and is available from the regular API endpoints to any other authorized party.
Obtaining the returned metadata is not covered in this article.
The following recipes make use of these environment variables:
# DataTrails Public Tenant
export PUBLIC_TENANT="tenant/6ea5cd00-c711-3649-6914-7b125928bbb4"

# Synsation Demo Tenant
# Replace TENANT with your Tenant ID to view your Tenant logs and events
export TENANT="tenant/6a009b40-eb55-4159-81f0-69024f89f53c"
A sensible value for --horizon is just a little longer than the interval between updates (hours is more than enough).
To miss an update for a tenant, more than 16,000 events would need to be recorded in the interval.
The previous command will replicate the logs of all tenants.
This requires about 3.5 megabytes per 16,000 events.
To restrict a replica to a specific set of tenants, specify those tenants to the watch command.
A common requirement is the public attestation tenant and your own tenant; to accomplish this, set $TENANT accordingly and run the weekly command sketched below.
To initialize the replica, run the same command once, but with an appropriately large --horizon.
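The weekly command itself did not survive in this capture; the sketch below reconstructs it from the pipeline shown later in this article, and assumes --tenant accepts a comma-separated list.

```bash
# Hedged sketch: weekly replication restricted to the public attestation
# tenant and your own tenant.
veracity --tenant $PUBLIC_TENANT,$TENANT watch --horizon 180h | \
    veracity replicate-logs --replicadir merklelogs

# Hedged sketch: one-off initialization using an appropriately large horizon.
veracity --tenant $PUBLIC_TENANT,$TENANT watch --horizon 90000h | \
    veracity replicate-logs --replicadir merklelogs
```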
The remainder of this article discusses the commands replicate-logs and watch in more depth, covering how to replicate selective tenants and explaining the significance of the replicated materials.
How Veracity Supports Integrity and Inclusion Protection#
DataTrails’ log format makes it simple to retain only the portions (massifs) of the log that are interesting.
Discarding uninteresting portions does not affect the independence or verifiability of the retained log.
This diagram illustrates the logical flow when updating a local replica using veracity.
---
config:
  theme: classic
---
sequenceDiagram
    actor v as Verifier
    box Runs locally to the verifier
        participant V as Veracity
        participant R as Replica
    end
    participant D as DataTrails
    v -->> V: Safely update my replica to massif X please
    V ->> D: Fetch and verify the remote massifs and seals up to X
    V ->> R: Check the verified remote data is consistent with the replica
    V ->> R: Update the replica with verified additions
    V -->> v: All OK!
For the guarantees of non-falsifiability and non-repudiation to be independent of DataTrails, replication and verification of at least the most recently updated massif is necessary.
The replica must be updated often enough to capture all massifs.
As a massif in the default tenant configuration contains over 16,000 events, the frequency necessary to support this guarantee is low, and completely determined by the specific tenant of interest.
Massifs verifying events that are no longer interesting can be safely discarded.
Remembering that the order in which events were recorded matches the order of data in the log, it is usually the case that all massifs before a certain point can be discarded together.
Saving the API response data when events are recorded, or obtaining the metadata using the DataTrails events API, is additionally required in order to support a full audit for data corruption.
When a trusted local copy of the verifiable log is included in the “verify before use” process, it is reasonable to rely on DataTrails storage of the metadata.
If the DataTrails storage of the metadata is changed, the verification will “fail safe” against the locally replicated log, because the changed data will not verify against the local replica.
While this is a “false negative”, it ensures safety in the face of accidental or malicious damage to the DataTrails storage systems, without the burden of maintaining copies of the metadata recorded in DataTrails.
Once the unsafe action is blocked, the appropriate next steps are very use-case dependent. The common thread is that it is critical the action is blocked in the first instance.
When the metadata is fetched, if it can be verified against the log replica, it proves that the DataTrails storage remains correct.
If it does not verify, it is proven that the metadata held by DataTrails is incorrect, though the Data being processed by the Consumer may still be correct and safe.
The veracity commands replicate-logs and watch are used to maintain the replica of the verifiable log.
veracity watch is used to give notice of which tenants have updates to their logs that need to be considered for replication.
veracity replicate-logs performs the activities in the diagram above. It can be directed to examine a specific tenant, or it can be provided with the output of veracity watch.
Every DataTrails log is a series of one or more massifs.
The last, called the head, is where verification data for new events is recorded.
Once the head is full, a new head automatically starts.
This means there are three basic scenarios that veracity copes with when updating a replica.
Updating the currently open replicated massif with the new additions in the DataTrails open massif.
Replicating the start of a new open massif from DataTrails.
Replicating a limited number of new massifs from DataTrails, performing local consistency checks only if the replicated massifs follow the latest local massif.
The first is the simplest to understand. In the diagram below, the dashed boxes correspond to the open massifs.
The local replica of the open massif will always be equal or lesser in size than the remote.
Once veracity verifies the remote copy is consistent with the remote seal, it will then check that the new data copied from the remote is consistent with its local copy of the open massif.
Consistent simply means it is an append, and that the remote has not “dropped” anything that it contained the last time it was replicated.
If there is any discrepancy in any of these checks, the current local data is left unchanged.
The local replica starts out having only Massifs 0 and 1, and Massif 1 happens to be complete.
On the next event recorded by DataTrails, a new remote massif, Massif 2, is created.
More events may be recorded before the replica is updated.
Each massif contains verification data for a little over 16,000 events.
Provided the replication commands are run before Massif 2 is also filled, we are dealing with this case.
The local Massif 1 is read because, before copying the remote Massif 2 into the local replica, its consistency against both the remote seal and the previous local massif, Massif 1, is checked.
Once those checks are successfully made, the local replica gains its initial copy of Massif 2.
By default, veracity will fetch and verify all massifs, up to the requested massif, that follow on immediately after the most recent local massif.
In this case, where we request --massif 4, the default would be to fetch, verify, and replicate Massifs 2, 3, and 4.
By default, a full tenant log is replicated.
The storage requirements are roughly 4MB per massif, and each massif holds the verification data for about 16,000 events.
To provide a means to bound the size of the local replica, and also to bound the amount of work, we provide the --ancestors option.
This specifies a fixed limit on the number of massifs that will be fetched.
In this example the limit is 0, meaning massif 4 is fetched and verified, and we leave a gap between the local Massif 2 and the new local Massif 4.
The gap means the consistency of the remote massif 4 is not checked against the local replica.
The command veracity replicate-logs --ancestors 0 --massif 4 requests that massif 4 is verified and then replicated locally, but prevents it from being verified for consistency against the current local replica.
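Combined with the replica directory used elsewhere in this article, a plausible full form of that command is sketched below; the flag placement follows the examples on this page.

```bash
# Hedged sketch: replicate massif 4 on its own; --ancestors 0 deliberately
# skips the consistency check against earlier local massifs.
veracity --tenant $TENANT replicate-logs --ancestors 0 --massif 4 --replicadir merklelogs
```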
There has been no activity in any tenant for the default watch horizon (how far back we look for changes).
To set an explicit, and in this example very large, horizon, try the following:
veracity watch --horizon 10000h
The watch command is used to determine the massif index, even when you are only interested in a single tenant.
You then provide that index to the replicate-logs command using the --massif option (a sketch follows):
By default, all massifs up to and including the massif specified by --massif <N> are verified remotely and checked for consistency against the local replica (following the logical steps in the diagram above).
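Putting the two commands together, a minimal sketch of the manual flow; the massif index 4 is illustrative:

```bash
# Hedged sketch: discover the current massif index for the tenant,
# then verify and replicate everything up to it.
veracity --tenant $TENANT watch --horizon 10000h
# Suppose the watch output reports massif index 4 for $TENANT:
veracity --tenant $TENANT replicate-logs --massif 4 --replicadir merklelogs
```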
The numbered .log files are the verifiable data for your log.
The .sth files are COSE Sign1 binary format signed messages.
Each .sth is associated with the identically numbered massif.
The log root material in the .sth signature attests to the entire state of the log up to the end of the associated massif.
The details of consuming the binary format of the seal and verifying the signature are beyond the scope of this article.
However, the implementation used by veracity can be found in the open-source merkle log library maintained by DataTrails:
go-datatrails-merklelog
To be sure that mistaken, or malicious, changes to DataTrails data stores can always be detected, run this command about once a week:
veracity --tenant $TENANT watch --horizon 180h | veracity replicate-logs --replicadir merklelogs
This process guarantees you cannot be misrepresented; any alternate version of events would be provably false.
To guarantee continued operation even if DataTrails is prevented from operating, a copy of the DataTrails metadata must be retained.
You can reasonably choose to trust the DataTrails copy because, even in the most extreme cases, it is “fail-safe” if DataTrails SaaS storage is compromised, when combined with a replicated verifiable merkle log.
DataTrails Assets can be used to track the status, contents, location, and other key attributes of containers over time. This can also be done for containers within containers. For example, you may wish to track bags inside boxes that are inside a shipping container being transported on a train.
A Container Asset is not a special type of asset; it is a label given to an Asset that has been created to represent a container. For more detail on the Asset creation process, please see our
DataTrails Overview guide. For this example, we will create a simple asset that we will call Shipping Container. Note that with DataTrails, we could also record more complex attributes such as the size of the container, its weight, location, or any other important details. For now, we will create a minimal Asset that includes the name and type.
Create the Shipping Container
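A hedged sketch of that minimal creation request; the attribute names follow the DataTrails conventions used elsewhere in these docs, and the values are illustrative:

```bash
# Hedged sketch: create a minimal Asset representing a shipping container,
# recording only its name and type.
curl -X POST \
    -H "@$HOME/.datatrails/bearer-token.txt" \
    -H "Content-Type: application/json" \
    -d '{
          "behaviours": ["RecordEvidence"],
          "attributes": {
            "arc_display_name": "Shipping Container",
            "arc_display_type": "Shipping Container"
          }
        }' \
    https://app.datatrails.ai/archivist/v2/assets
```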
@@ -98,4 +98,4 @@
curl -g -X GET \
  -H "@$HOME/.datatrails/bearer-token.txt" \
  "https://app.datatrails.ai/archivist/v2/assets?attributes.within_container=Shipping%20Container" | jq
-
The document_ prefix is used to designate attributes that are part of the profile. Some of these are interpreted by DataTrails and others are guidelines.
If a document is no longer required, or if for any reason it is decided that it should no longer be used, then a document can be withdrawn.
Withdrawal is optional and it is usually the final event in the document lifecycle. It can be reversed in DataTrails by publishing a new version.
Withdraw an entire document (mark that it is no longer considered current).

| Event Attributes | Meaning | Requirement |
|---|---|---|
| arc_display_type | Tells DataTrails how to interpret Event | Required, must be set to Withdraw |
| document_withdrawal_reason | Reason why the document has been withdrawn | Optional, but encouraged |

| Asset Attributes | Meaning | Requirement |
|---|---|---|
| document_status | Label for filtering and accommodating critical document lifecycle events | |
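A hedged sketch of a withdrawal event built from the attributes above; the asset UUID is a placeholder and the document_status value is an assumption:

```bash
# Hedged sketch: record a Withdraw event and update the document_status
# asset attribute; "Withdrawn" is an assumed status label.
curl -X POST \
    -H "@$HOME/.datatrails/bearer-token.txt" \
    -H "Content-Type: application/json" \
    -d '{
          "operation": "Record",
          "behaviour": "RecordEvidence",
          "event_attributes": {
            "arc_display_type": "Withdraw",
            "document_withdrawal_reason": "superseded by a new version"
          },
          "asset_attributes": {
            "document_status": "Withdrawn"
          }
        }' \
    https://app.datatrails.ai/archivist/v2/assets/<asset-uuid>/events
```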
\ No newline at end of file
diff --git a/developers/developer-patterns/getting-access-tokens-using-app-registrations/index.html b/developers/developer-patterns/getting-access-tokens-using-app-registrations/index.html
index 17589a55d..aa1ea9338 100644
--- a/developers/developer-patterns/getting-access-tokens-using-app-registrations/index.html
+++ b/developers/developer-patterns/getting-access-tokens-using-app-registrations/index.html
@@ -1,4 +1,4 @@
-Creating Access Tokens Using a Custom Integration - DataTrails
+Creating Access Tokens Using a Custom Integration - DataTrails
Non-interactive access to the DataTrails platform is managed by creating Integrations with either a Custom Integration or one of the built-in Integrations. This is done using either the Settings or Integrations menus in the DataTrails UI or by using the App Registrations API directly.
Note: App Registration is the old name for a Custom Integration.
Custom Integrations have a CLIENT_ID and a SECRET, these are used to authenticate with DataTrails IAM endpoints using
JSON Web Tokens (JWT).
DataTrails authentication uses the industry-standard OIDC Client Credentials Flow.
The high level steps are:
Create an Integration in the UI
Define access permissions for the Integration in the UI
Request an Access Token using the API
Use the Access Token to make a REST API call to your tenancy.
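A hedged sketch of the token request and first API call; the appidp token path is assumed from the standard DataTrails examples, and CLIENT_ID and SECRET are the values saved from your Integration:

```bash
# Hedged sketch: exchange the Integration credentials for an access token
# (OIDC client credentials flow), then store it for later curl calls.
RESPONSE=$(curl -s https://app.datatrails.ai/archivist/iam/v1/appidp/token \
    --data-urlencode "grant_type=client_credentials" \
    --data-urlencode "client_id=${CLIENT_ID}" \
    --data-urlencode "client_secret=${SECRET}")
TOKEN=$(echo -n "$RESPONSE" | jq -r .access_token)
mkdir -p $HOME/.datatrails
echo "Authorization: Bearer $TOKEN" > $HOME/.datatrails/bearer-token.txt

# Use the token to make a REST API call to your tenancy:
curl -H "@$HOME/.datatrails/bearer-token.txt" \
    https://app.datatrails.ai/archivist/v2/assets | jq
```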
If you have already saved a CLIENT_ID and a SECRET, with the correct
@@ -93,4 +93,4 @@
"iss":"https://app.datatrails.ai/appidpv1","aud":"https://app.datatrails.ai/archivist"}
-
This sub-section of the Developers subject area contains more detailed information on topics that cannot be covered by the API or YAML Runner references.
You will find articles on common developer tasks and concept guides that are relevant to developers.
Check out the articles below for more information!
\ No newline at end of file
diff --git a/developers/developer-patterns/index.xml b/developers/developer-patterns/index.xml
index 8759a807d..78a83bb5c 100644
--- a/developers/developer-patterns/index.xml
+++ b/developers/developer-patterns/index.xml
@@ -9,4 +9,6 @@ In this guide we’ll explore how you can use Veracity to:
Prove the inclusion of events that matter in the DataTrails merkle log with verify-included Explore the DataTrails merkle log using the node command Prerequisites Have downloaded and installed Veracity using the instructions found here Verifying Event Data DataTrails records the events that matter to your business and lets you prove them at a later date.Navigating the Merkle Loghttps://docs.datatrails.ai/developers/developer-patterns/navigating-merklelogs/Mon, 01 Jan 0001 00:00:00 +0000https://docs.datatrails.ai/developers/developer-patterns/navigating-merklelogs/This article explains how to navigate the Merkle Log, using the DataTrails Merkle Mountain Range implementation.
DataTrails publishes the data necessary for immediately verifying events to highly available commodity cloud storage. “Verifiable data” is synonymous with log or transparency log. Once verifiable data is written to the log it is never changed. The log only grows, it never shrinks and data in it never moves within the log.
To work with Merkle Log format, DataTrails provides open-source tooling for working in offline environments.Massif blob pre-calculated offsetshttps://docs.datatrails.ai/developers/developer-patterns/massif-blob-offset-tables/Mon, 01 Jan 0001 00:00:00 +0000https://docs.datatrails.ai/developers/developer-patterns/massif-blob-offset-tables/This page provides lookup tables for navigating the dynamic, but computable, offsets into the Merkle log binary format. The algorithms to reproduce this are relatively simple. DataTrails provides open-source implementations, but in many contexts it is simpler to use these pre-calculations. These tables can be made for any log configuration at any time, in part or in whole, without access to any specific log.
-This is a quick review of the log format.Quickstart: SCITT Statements (Preview)https://docs.datatrails.ai/developers/developer-patterns/scitt-api/Wed, 09 Jun 2021 13:49:35 +0100https://docs.datatrails.ai/developers/developer-patterns/scitt-api/The SCITT API is currently in preview and subject to change The Supply Chain Integrity, Transparency and Trust (SCITT) initiative is a set of IETF standards for managing the compliance and auditability of goods and services across end-to-end supply chains. SCITT supports the ongoing verification of goods and services where the authenticity of entities, evidence, policy, and artifacts can be assured and the actions of entities can be guaranteed to be authorized, non-repudiable, immutable, and auditable.
\ No newline at end of file
+This is a quick review of the log format.Verified Replication of the Datatrails Transparency Logshttps://docs.datatrails.ai/developers/developer-patterns/3rdparty-verification/Thu, 22 Aug 2024 19:35:35 +0100https://docs.datatrails.ai/developers/developer-patterns/3rdparty-verification/Introduction Without the measures described in this article, it is still extremely challenging to compromise a transparency solution based on DataTrails.
+To do so, the systems of more than just DataTrails need to be compromised in very specific ways. To illustrate this, consider this typical flow for how Data can be used in a transparent and tamper evident way with DataTrails.
+Replicated Transparency Logs This is already a very robust process.Quickstart: SCITT Statements (Preview)https://docs.datatrails.ai/developers/developer-patterns/scitt-api/Wed, 09 Jun 2021 13:49:35 +0100https://docs.datatrails.ai/developers/developer-patterns/scitt-api/The SCITT API is currently in preview and subject to change The Supply Chain Integrity, Transparency and Trust (SCITT) initiative is a set of IETF standards for managing the compliance and auditability of goods and services across end-to-end supply chains. SCITT supports the ongoing verification of goods and services where the authenticity of entities, evidence, policy, and artifacts can be assured and the actions of entities can be guaranteed to be authorized, non-repudiable, immutable, and auditable.
\ No newline at end of file
diff --git a/developers/developer-patterns/massif-blob-offset-tables/index.html b/developers/developer-patterns/massif-blob-offset-tables/index.html
index 68a085115..036616991 100644
--- a/developers/developer-patterns/massif-blob-offset-tables/index.html
+++ b/developers/developer-patterns/massif-blob-offset-tables/index.html
@@ -1,4 +1,4 @@
-Massif blob pre-calculated offsets - DataTrails
+Massif blob pre-calculated offsets - DataTrails
Lookup tables for navigating the dynamic, but computable, offsets into the Merkle log binary format
This page provides lookup tables for navigating the dynamic, but computable, offsets into the Merkle log binary format.
The algorithms to reproduce this are relatively simple.
@@ -126,4 +126,4 @@
} return sum; }
-
Namespace is a tool that can be used to prevent unwanted interactions when multiple users are performing testing in the same Tenancy. Using two separate namespaces prevents collisions that may cause undesirable results by allowing multiple users to interact with the same Assets and Events without interrupting each other.
Namespace can be added as an attribute within the files you are testing, or as a variable in your Bash environment.
To add namespace as an attribute to your files, use the arc_namespace key. For example:
To use namespace as a variable, such as the date, add the argument to your Bash environment:
export TEST_NAMESPACE=date
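If the intent is a date-based namespace, the plain string date above may be extraction damage; a plausible form uses command substitution:

```bash
# Hedged sketch: a namespace derived from the current date.
export TEST_NAMESPACE=$(date +%Y%m%d)
```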
See
-TEST_NAMESPACE in our GitHub repository for more information. TEST_NAMESPACE can also be added to your Bash profile to be automatically picked up when testing.
\ No newline at end of file
+TEST_NAMESPACE in our GitHub repository for more information. TEST_NAMESPACE can also be added to your Bash profile to be automatically picked up when testing.
\ No newline at end of file
diff --git a/developers/developer-patterns/navigating-merklelogs/index.html b/developers/developer-patterns/navigating-merklelogs/index.html
index 2ea64780d..e53310a2d 100644
--- a/developers/developer-patterns/navigating-merklelogs/index.html
+++ b/developers/developer-patterns/navigating-merklelogs/index.html
@@ -1,4 +1,4 @@
-Navigating the Merkle Log - DataTrails
+Navigating the Merkle Log - DataTrails
Accessing the data needed to verify from first principles
This article explains how to navigate the Merkle Log, using the DataTrails Merkle Mountain Range implementation.
DataTrails publishes the data necessary for immediately verifying events to highly available commodity cloud storage.
“Verifiable data” is synonymous with log or transparency log.
@@ -416,4 +416,4 @@
Snowflake ID scheme.
The DataTrails implementation can be found at
nextid.go↩︎
Such a path of hashes is commonly referred to as a “proof”, a “witness”, and an “authentication path”.
-A Merkle Tree is sometimes referred to as an authenticated data structure or a verifiable data structure. For the purposes of this article, there is no meaningful difference. They are all the same thing. We stick to “verification” and “verifiable data structure” in this article. ↩︎
\ No newline at end of file
+A Merkle Tree is sometimes referred to as an authenticated data structure or a verifiable data structure. For the purposes of this article, there is no meaningful difference. They are all the same thing. We stick to “verification” and “verifiable data structure” in this article. ↩︎
How to push a collection of Statements using SCITT APIs
The SCITT API is currently in preview and subject to change
The Supply Chain Integrity, Transparency and Trust (SCITT) initiative is a set of
IETF standards for managing the compliance and auditability of goods and services across end-to-end supply chains.
@@ -64,4 +64,4 @@
https://app.datatrails.ai/archivist/v2/publicassets/-/events?event_attributes.subject=$SUBJECT | jq
Coming soon: Filter on specific content types, such as what SBOMs have been registered, or which issuers have made statements.
The quickstart created a collection of statements for a given artifact.
Over time, as new information is available, authors can publish new statements which verifiers and consumers can benefit from, making decisions specific to their environment.
There are no limits to the types of additional statements that may be registered, which may include new vulnerability information, notifications of new versions, end of life (EOL) notifications, or more.
-By using the content-type parameter, verifiers can filter to specific types, filter statements by the issuer, or other headers & metadata.
\ No newline at end of file
+By using the content-type parameter, verifiers can filter to specific types, filter statements by the issuer, or other headers & metadata.
\ No newline at end of file
diff --git a/developers/developer-patterns/sitemap.xml b/developers/developer-patterns/sitemap.xml
index 8c9a1c25b..27f25d57c 100644
--- a/developers/developer-patterns/sitemap.xml
+++ b/developers/developer-patterns/sitemap.xml
@@ -1 +1 @@
-/developers/developer-patterns/getting-access-tokens-using-app-registrations/2023-09-27T11:12:25+01:00weekly0.5/developers/developer-patterns/containers-as-assets/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/namespace/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/document-profile/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/software-package-profile/2023-06-26T11:56:01+01:00weekly0.5/developers/developer-patterns/veracity/2024-08-22T19:35:35+01:00weekly0.5/developers/developer-patterns/navigating-merklelogs/weekly0.5/developers/developer-patterns/massif-blob-offset-tables/weekly0.5/developers/developer-patterns/scitt-api/2021-06-09T13:49:35+01:00weekly0.5
\ No newline at end of file
+/developers/developer-patterns/getting-access-tokens-using-app-registrations/2023-09-27T11:12:25+01:00weekly0.5/developers/developer-patterns/containers-as-assets/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/namespace/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/document-profile/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/software-package-profile/2023-06-26T11:56:01+01:00weekly0.5/developers/developer-patterns/veracity/2024-08-22T19:35:35+01:00weekly0.5/developers/developer-patterns/navigating-merklelogs/weekly0.5/developers/developer-patterns/massif-blob-offset-tables/weekly0.5/developers/developer-patterns/3rdparty-verification/2024-08-22T19:35:35+01:00weekly0.5/developers/developer-patterns/scitt-api/2021-06-09T13:49:35+01:00weekly0.5
\ No newline at end of file
diff --git a/developers/developer-patterns/software-package-profile/index.html b/developers/developer-patterns/software-package-profile/index.html
index 10fbe17c6..15c0ac2a7 100644
--- a/developers/developer-patterns/software-package-profile/index.html
+++ b/developers/developer-patterns/software-package-profile/index.html
@@ -1,4 +1,4 @@
-Software Package Profile - DataTrails
+Software Package Profile - DataTrails
The DataTrails Software Package profile is a set of suggested Asset and Event attributes that enable the recording of an immutable and verifiable Software Bill of Materials (SBOM).
The
NTIA describes an SBOM as “a formal record containing the details and supply chain relationships of various components used in building software.”
| NTIA Attribute | Asset Attributes | Meaning | Requirement |
|---|---|---|---|
| Unique Identifier | sbom_uuid | A unique identifier for the Package; DataTrails provides a Unique ID per asset but it may be preferred to include an existing internal reference instead | Required |
| N/A | sbom_repo | Link to the Git Repo of the Component | Optional |
| N/A | sbom_release_notes | Link to the release notes of the package version | Optional |
| N/A | sbom_license | The licensing used by the component (if specified) | Optional |
In the API, you must set the public asset attribute to true to make an SBOM public. The default is false.
@@ -43,4 +43,4 @@
"public":true}
Software Package Profile Event Types and Attributes#
A Release is the event used by a Supplier to provide an SBOM for their Software Package in DataTrails.
The Release attributes tracked in DataTrails should minimally represent the base information required by the NTIA standard and be recorded in two separate lists of attributes: Asset Attributes track details about the latest release of the SBOM at the time of the event creation, while Event Attributes track details about the release of the SBOM that is being submitted.
The sbom_ prefix is used to designate attributes that are part of the event and asset. Some of these are interpreted by DataTrails and others are guidelines.
| NTIA Attribute | Event Attributes | Meaning | Requirement |
|---|---|---|---|
| N/A | arc_display_type | Tells DataTrails how to interpret Event | Required, must be set to Release |
| Author Name | sbom_author | The name of the Package Author | Required |
| Supplier Name | sbom_supplier | The name of the Package Supplier | Required |
| Component Name | sbom_component | The name of the Package | Required |
| Version String | sbom_version | The version of the Package | Required |
| Unique Identifier | sbom_uuid | A unique identifier for the Package; DataTrails provides a Unique ID per asset but it may be preferred to include an existing internal reference instead | Required |
| N/A | sbom_repo | Link to the Git Repo of the Component | Optional |
| N/A | sbom_release_notes | Link to the release notes of the release | Optional |
| N/A | sbom_license | The licensing used by the component (if specified) | Optional |
| N/A | sbom_exception | If included, value is always true | Optional |
| N/A | sbom_vuln_reference | If this release resolves a specific vulnerability you can highlight a shared Vulnerability reference number(s) | Optional |
| NTIA Attribute | Asset Attributes | Meaning | Requirement |
|---|---|---|---|
| Author Name | sbom_author | The name of the Package Author | Required |
| Supplier Name | sbom_supplier | The name of the Package Supplier | Required |
| Component Name | sbom_component (arc_display_name if appropriate) | The name of the Software Package | Required |
| Version String | sbom_version | The version of the Software Package | Required |
| Unique Identifier | sbom_uuid | A unique identifier for the Package; DataTrails provides a Unique ID per asset but it may be preferred to include an existing internal reference instead | Required |
| N/A | sbom_repo | Link to the Git Repo of the Component | Optional |
| N/A | sbom_release_notes | Link to the release notes of the package version | Optional |
| N/A | sbom_license | The licensing used by the component (if specified) | Optional |
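A hedged sketch of a Release event carrying the required NTIA-aligned attributes; the asset UUID is a placeholder and the values are illustrative:

```bash
# Hedged sketch: record a Release event for a Software Package asset, with
# Event Attributes for this release and Asset Attributes for the latest state.
curl -X POST \
    -H "@$HOME/.datatrails/bearer-token.txt" \
    -H "Content-Type: application/json" \
    -d '{
          "operation": "Record",
          "behaviour": "RecordEvidence",
          "event_attributes": {
            "arc_display_type": "Release",
            "sbom_author": "Acme Corp",
            "sbom_supplier": "Acme Corp",
            "sbom_component": "acme-detector",
            "sbom_version": "1.0.0",
            "sbom_uuid": "com.acme.detector-1.0.0"
          },
          "asset_attributes": {
            "sbom_component": "acme-detector",
            "sbom_version": "1.0.0"
          }
        }' \
    https://app.datatrails.ai/archivist/v2/assets/<asset-uuid>/events
```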
When used in tandem with Release Plan and Accepted events, the exception is a useful record of when an emergency has caused a release to be pushed without needing an initial approval or plan.
Release events can optionally be enhanced by using ‘Release Plan’ and ‘Release Accepted’ events alongside them.
Release Plan events demonstrate an intent to introduce a new release; they should describe which version you want to release and who wants to release it. For example, a plan could include draft release notes explaining what is being updated and why it should be updated.
Release Accepted events demonstrate an approval of a Release Plan to go forward; it may be that the plan details a need to introduce a fix for a specific vulnerability and the security team is needed to sign off the release going forward.
These events are not essential to the process, so they can be omitted in a standard or minimal deployment, but they are actively encouraged. As they should not affect the information about the latest Software Package Release, there should be no Asset Attributes included; other NTIA attributes may also be unnecessary or unavailable until release (e.g. Component Hash).
The Key Attribute that should be recorded is the version of the release that is being planned and accepted.
The sbom_planned_ prefix is used to designate attributes that are part of the event. Some of these are interpreted by DataTrails and others are guidelines.
| NTIA Attribute | Event Attributes | Meaning | Requirement |
|---|---|---|---|
| N/A | arc_display_type | Tells DataTrails how to interpret Event | Required, must be set to Release Plan |
| Component Name | sbom_planned_component | The planned name of the Package | Required |
| Version String | sbom_planned_version | The planned version of the Package | Required |
| N/A | sbom_planned_reference | A reference number for the plan (such as an internal change request number) | Required |
| N/A | sbom_planned_date | The planned release date | Required |
| N/A | sbom_planned_captain | The planned Release Captain (a common term for someone who is responsible for performing a Release; someone like an Owner in Agile serves a different purpose but may also be used if appropriate). This is mandatory as it describes who should be responsible for the release | Required |
| Author Name | sbom_planned_author | The planned name of the Package Author | Optional |
| Supplier Name | sbom_planned_supplier | The planned name of the Package Supplier | Optional |
| Component Hash | sbom_planned_hash | The planned hash of the component files/installation (per version) | Optional |
| Unique Identifier | sbom_planned_uuid | The planned unique identifier for the Package; DataTrails provides a Unique ID per asset but it may be preferred to include an existing internal reference instead | Optional |
| N/A | sbom_planned_license | If there is an intended change to the license this may be needed | Optional |
| N/A | sbom_planned_vuln_reference | If this release intends to resolve a specific vulnerability you can highlight a shared Vulnerability reference number(s) | Optional |
The sbom_accepted_ prefix is used to designate attributes that are part of the event. Some of these are interpreted by DataTrails and others are guidelines.
| NTIA Attribute | Event Attributes | Meaning | Requirement |
|---|---|---|---|
| N/A | arc_display_type | Tells DataTrails how to interpret Event | Required, must be set to Release Accepted |
| Component Name | sbom_accepted_component | The accepted name of the Package | Required |
| Version String | sbom_accepted_version | The accepted version of the Package | Required |
| N/A | sbom_accepted_reference | The reference number of the associated plan | Required |
| N/A | sbom_accepted_date | The accepted release date | Required |
| N/A | sbom_accepted_captain | The accepted Release Captain (a common term for someone who is responsible for performing a Release; someone like an Owner in Agile serves a different purpose but may also be used if appropriate). This is mandatory as it describes who should be responsible for the release | Required |
| N/A | sbom_accepted_approver | Describes who has accepted the plan | Required |
| Author Name | sbom_accepted_author | The accepted name of the Package Author | Optional |
| Supplier Name | sbom_accepted_supplier | The accepted name of the Package Supplier | Optional |
| Component Hash | sbom_accepted_hash | The accepted hash of the component files/installation (per version) | Optional |
| Unique Identifier | sbom_accepted_uuid | The accepted unique identifier for the Package; DataTrails provides a Unique ID per asset but it may be preferred to include an existing internal reference instead | Optional |
| N/A | sbom_accepted_vuln_reference | If this release intends to resolve a specific vulnerability you can highlight a shared Vulnerability reference number(s) | Optional |
Patches are often supplied to customers in an Out-Of-Band procedure to address critical bugs or vulnerabilities, usually with a short-term turnaround that can be outside the normal release cadence.
It is typically expected that a Patch should contain its own SBOM, separate from the Primary SBOM.
The sbom_patch_ prefix is used to designate attributes that are part of the event. Some of these are interpreted by DataTrails and others are guidelines.
| NTIA Attribute | Event Attributes | Meaning | Requirement |
|---|---|---|---|
| N/A | arc_display_type | Tells DataTrails how to interpret Event | Required, must be set to Patch |
| Component Name | sbom_patch_target_component | The component the Patch targets | Required |
| Version String | sbom_patch_version | The version string of the Patch | Required |
| Author Name | sbom_patch_author | The name of the Patch Author | Required |
| Supplier Name | sbom_patch_supplier | The name of the Patch Supplier | Required |
| Component Hash | sbom_patch_hash | The hash of the Patch files/installation (per version) | Required |
| Unique Identifier | sbom_patch_uuid | The accepted unique identifier for the Package; DataTrails provides a Unique ID per asset but it may be preferred to include an existing internal reference instead | Required |
| N/A | sbom_patch_target_version | The version of the component the patch is targeted/built from | Required |
| N/A | sbom_patch_repo | Link to the Git Repo/Fork/Branch of the Component (if different to the latest release repo) | Optional |
| N/A | sbom_patch_license | The licensing used by the component (if specified and different to the latest release license) | Optional |
| N/A | sbom_patch_vuln_reference | If this patch resolves a specific vulnerability you can highlight a shared Vulnerability reference number | Optional |
These Event types are used for vulnerability management.
The first is to disclose knowledge of a vulnerability and the second is to update the status of the vulnerability after investigation is complete.
Both Event types use the same attributes:
Event Attributes | Meaning | Requirement
vuln_reference | Reference Number (e.g. an internal tracking number), useful when there may be multiple updates to a vulnerability during an investigation and for referencing when a particular release is expected to resolve a vulnerability | Required
vuln_id | Specific ID of the Vulnerability (e.g. CVE-2018-0171) | Required
vuln_category | Type of Vulnerability (e.g. CVE) | Required
vuln_severity | Severity of the Vulnerability (e.g. HIGH) | Required
vuln_status | Whether the Vulnerability actually affects your component or is still being investigated (e.g. Known_not_affected) | Required
An event to mark the Package as End of Life.
The sbom_eol_ prefix is used to designate attributes that are part of the event. All of these are interpreted by DataTrails.
NTIA Attribute | Event Attributes | Meaning | Requirement
N/A | arc_display_type | Tells DataTrails how to interpret the Event | Required, must be set to EOL
Component Name | sbom_eol_target_component | The component the EOL targets | Required
Version String | sbom_eol_target_version | The version string affected by the EOL | Required
Author Name | sbom_eol_author | The name of the EOL Author | Required
Unique Identifier | sbom_eol_uuid | The unique identifier for the Package; DataTrails provides a unique ID per Asset, but an existing internal reference may be included instead |
\ No newline at end of file
+The first is to disclose knowledge of a vulnerability and the second is to update the status of the vulnerability after investigation is complete.
Both Event types use the same attributes:
Event Attributes | Meaning | Requirement
vuln_reference | Reference Number (e.g. an internal tracking number), useful when there may be multiple updates to a vulnerability during an investigation and for referencing when a particular release is expected to resolve a vulnerability | Required
vuln_id | Specific ID of the Vulnerability (e.g. CVE-2018-0171) | Required
vuln_category | Type of Vulnerability (e.g. CVE) | Required
vuln_severity | Severity of the Vulnerability (e.g. HIGH) | Required
vuln_status | Whether the Vulnerability actually affects your component or is still being investigated (e.g. Known_not_affected) | Required
An event to mark the Package as End of Life.
The sbom_eol_ prefix is used to designate attributes that are part of the event. All of these are interpreted by DataTrails.
NTIA Attribute | Event Attributes | Meaning | Requirement
N/A | arc_display_type | Tells DataTrails how to interpret the Event | Required, must be set to EOL
Component Name | sbom_eol_target_component | The component the EOL targets | Required
Version String | sbom_eol_target_version | The version string affected by the EOL | Required
Author Name | sbom_eol_author | The name of the EOL Author | Required
Unique Identifier | sbom_eol_uuid | The unique identifier for the Package; DataTrails provides a unique ID per Asset, but an existing internal reference may be included instead |
Veracity is an open-source command line tool developed by DataTrails. With it, you can explore the
merkle log and prove the inclusion of your event data. By default it connects to the DataTrails
@@ -101,4 +101,4 @@
The value returned is the hash stored at that node:
Leaf nodes in the merkle log contain the hash of the event data (plus some metadata, see
this article) while
-intermediate nodes hash together the content of their left and right children.
If you are a developer looking to easily add provenance to your data, this section is for you. You may be building a new application or looking for a way to add functionality to something that you already use every day.
The DataTrails REST API, Python SDK, or the YAML runner provide a simple way for you to integrate a provenance layer into your existing data platform so that you do not need to change the way that your users work.
Check out the sub-sections below for more information!
Developer Patterns → Go here for information on setting up an App Registration, requesting an Access Token together with other developer concepts and user profile descriptions.
API Reference → The DataTrails REST API endpoint examples and definitions can be found here.
YAML Runner Reference → The YAML reference contains information and examples for those who work with YAML files and would prefer to use this method to access the API.
\ No newline at end of file
diff --git a/developers/sitemap.xml b/developers/sitemap.xml
index 151fb61e4..3ab0e92df 100644
--- a/developers/sitemap.xml
+++ b/developers/sitemap.xml
@@ -1 +1 @@
-/developers/developer-patterns/2023-05-31T10:14:18+01:00weekly0.5/developers/yaml-reference/2023-05-31T10:14:18+01:00weekly0.5/developers/api-reference/2021-06-09T10:19:37+01:00weekly0.5/developers/developer-patterns/getting-access-tokens-using-app-registrations/2023-09-27T11:12:25+01:00weekly0.5/developers/developer-patterns/containers-as-assets/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/namespace/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/document-profile/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/software-package-profile/2023-06-26T11:56:01+01:00weekly0.5/developers/developer-patterns/veracity/2024-08-22T19:35:35+01:00weekly0.5/developers/developer-patterns/navigating-merklelogs/weekly0.5/developers/developer-patterns/massif-blob-offset-tables/weekly0.5/developers/developer-patterns/scitt-api/2021-06-09T13:49:35+01:00weekly0.5/developers/yaml-reference/story-runner-components/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/assets/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/events/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/locations/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/subjects/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/compliance/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/estate-info/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/app-registrations-api/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/assets-api/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/attachments-api/2021-06-09T12:05:02+01:00weekly0.5/developers/api-reference/blobs-api/2021-06-09T13:32:57+01:00weekly0.5/developers/api-reference/compliance-api/2021-06-09T12:07:13+01:00weekly0.5/developers/api-reference/events-api/2021-06-09T11:48:40+01:00weekly0.5/developers/api-reference/iam-policies-api/2021-06-09T12:02:15+01:00weekly0.5/developers/api-reference/iam-subjects-api/2021-06-09T12:02:15+01:00weekly0.5/developers/api-reference/locations-api/2021-06-09T11:56:23+01:00weekly0.5/developers/api-reference/public-assets-api/2021-06-09T11:56:23+01:00weekly0.5/developers/api-reference/tenancies-api/2021-06-09T13:29:57+01:00weekly0.5/developers/api-reference/caps-api/2024-03-05T11:30:29+00:00weekly0.5
\ No newline at end of file
+/developers/developer-patterns/2023-05-31T10:14:18+01:00weekly0.5/developers/yaml-reference/2023-05-31T10:14:18+01:00weekly0.5/developers/api-reference/2021-06-09T10:19:37+01:00weekly0.5/developers/developer-patterns/getting-access-tokens-using-app-registrations/2023-09-27T11:12:25+01:00weekly0.5/developers/developer-patterns/containers-as-assets/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/namespace/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/document-profile/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/software-package-profile/2023-06-26T11:56:01+01:00weekly0.5/developers/developer-patterns/veracity/2024-08-22T19:35:35+01:00weekly0.5/developers/developer-patterns/navigating-merklelogs/weekly0.5/developers/developer-patterns/massif-blob-offset-tables/weekly0.5/developers/developer-patterns/3rdparty-verification/2024-08-22T19:35:35+01:00weekly0.5/developers/developer-patterns/scitt-api/2021-06-09T13:49:35+01:00weekly0.5/developers/yaml-reference/story-runner-components/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/assets/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/events/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/locations/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/subjects/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/compliance/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/estate-info/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/app-registrations-api/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/assets-api/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/attachments-api/2021-06-09T12:05:02+01:00weekly0.5/developers/api-reference/blobs-api/2021-06-09T13:32:57+01:00weekly0.5/developers/api-reference/compliance-api/2021-06-09T12:07:13+01:00weekly0.5/developers/api-reference/events-api/2021-06-09T11:48:40+01:00weekly0.5/developers/api-reference/iam-policies-api/2021-06-09T12:02:15+01:00weekly0.5/developers/api-reference/iam-subjects-api/2021-06-09T12:02:15+01:00weekly0.5/developers/api-reference/locations-api/2021-06-09T11:56:23+01:00weekly0.5/developers/api-reference/public-assets-api/2021-06-09T11:56:23+01:00weekly0.5/developers/api-reference/tenancies-api/2021-06-09T13:29:57+01:00weekly0.5/developers/api-reference/caps-api/2024-03-05T11:30:29+00:00weekly0.5
\ No newline at end of file
diff --git a/developers/yaml-reference/assets/index.html b/developers/yaml-reference/assets/index.html
index c218d4c66..e450ead6b 100644
--- a/developers/yaml-reference/assets/index.html
+++ b/developers/yaml-reference/assets/index.html
@@ -1,4 +1,4 @@
-Assets YAML Runner - DataTrails
+Assets YAML Runner - DataTrails
Adding an asset_label allows your Asset to be referenced in later steps of the story. For example, if you want to add a Compliance Policy for the Asset after it is created.
The arc_namespace (for the Asset) and the namespace (for the location) are used to distinguish between Assets and Locations created between runs of the story. Usually, these field values are derived from an environment variable ARCHIVIST_NAMESPACE (default value is namespace).
The optional confirm: true entry means that the YAML Runner will wait for the Asset to be committed before moving on to the next step. This is beneficial if the Asset will be referenced in later steps.
For example:
---steps:
@@ -83,4 +83,4 @@
description:Wait for all Assets in the wipp namespace to be confirmedattrs:arc_namespace:wipp
-
This action creates a Compliance Policy that assets may be tested against.
The specific fields required for creating Compliance Policies vary depending on the type of policy being used. Please see the
Compliance Policies section for details regarding Compliance Policy types and YAML Runner examples of each.
For example, a COMPLIANCE_RICHNESS policy that asserts radiation level must be less than 7:
---
@@ -29,4 +29,4 @@
description:Check Compliance of EV pump 1.report:trueasset_label:ev pump 1
-
\ No newline at end of file
diff --git a/developers/yaml-reference/estate-info/index.html b/developers/yaml-reference/estate-info/index.html
index 2ab9f44d8..244663f9e 100644
--- a/developers/yaml-reference/estate-info/index.html
+++ b/developers/yaml-reference/estate-info/index.html
@@ -1,4 +1,4 @@
-Estate Information YAML Runner - DataTrails
+Estate Information YAML Runner - DataTrails
The asset_label must match the setting when the Asset was created in an earlier step. The asset_label may also be specified as the Asset ID of an existing Asset, in the form assets/<asset-id>.
There are a few optional settings that can be used when creating Events. attachments uploads the attachment to DataTrails and the response is added to the Event before posting. location creates the location if it does not exist and adds it to the Event. The sbom setting uploads the SBOM to DataTrails and adds the response to the Event before posting.
confirm: true tells the YAML Runner to wait for the Event to be committed before moving to the next step.
This is optional and only necessary if your workflow requires 3rd parties (public or other DataTrails tenancies) to immediately view the Event.
@@ -89,4 +89,4 @@
arc_display_type:openasset_attrs:arc_display_type:door
-
This action checks to see if the location you are looking to create already exists, and if not, executes the creation of your new location. The action checks for a location with the same identifier to verify that the location does not already exist.
If this action is executed as part of a series of YAML Runner steps, the location created can be referenced in later steps using the key location_label.
When you create your location, you may also add location attributes. In the example below, information such as the facility address and type have been included, as well as contact information for the location’s reception:
---steps:
@@ -43,4 +43,4 @@
print_response:trueattrs:director:John Smith
-
Required for every operation, the action specifies what function will be performed.
description
Optional string that describes what the step is doing. For example, “Create the Asset My First Container”.
asset_label
For a series of steps run as one file, the Asset label could be a friendly name used by later steps to refer back to an Asset created in a previous step. If the Asset already exists, this field may be used to reference the Asset ID in the form assets/<asset-id>.
location_label
For a series of steps run as one file, the location label could be a friendly name used by later steps to refer back to a location created in a previous step. If the location already exists, this field may be used to reference the Location ID in the form locations/<location-id>.
subject_label
For a series of steps run as one file, the Subject label could be a friendly name used by later steps to refer back to a Subject created in a previous step. If the Subject already exists, this field may be used to reference the Subject ID in the form subjects/<subject-id>.
print_response
Specifying this field as true emits a JSON representation of the response, useful for debugging purposes.
wait_time
Optional field specifying a number of seconds the story runner will pause before executing the next step. Useful for demonstration and/or testing Compliance Policies.
Each step of the YAML Runner follows the same general pattern:
This action creates a Subject using their wallet_pub_key and tessera_pub_key. Adding a subject_label allows the Subject to be referenced in later YAML Runner steps.
a DataTrails Asset is an entry in your tenancy, which has a collection of attributes that describes its current state and a complete life history of Events
when dealing with Document profile Assets in DataTrails you can attach certain lifecycle stage metadata to them such as ‘Draft’, ‘Published’, or ‘Withdrawn’ in order to properly convey whether or not someone checking provenance of the document should rely on a particular version
events in DataTrails are labeled with a ’type’ that signify what kind of evidence they relate to, for instance a ‘Publish’ event on a document, or a ‘Shipping’ event on physical goods. Event types can be very useful for defining access control rules as well as filtering the audit trail for specific kinds of information
the Merkle log is the name for the verifiable data structure that is used by DataTrails to store the Event transaction data. It is append only and is based on a type of Merkle tree that is built from multiple massifs
As the massifs grow and multiply, the structure is called a Merkle Mountain Range (MMR) representing the multiple peaks. Its key characteristic is that previously added values, and also the organization of those values, does not change as new entries are appended to the log
tenancy name visible to others in place of the tenancy ID when viewing the Asset Overview of a public Asset or a shared private Asset. Must be verified by the DataTrails team
an organization which has paid to have their domain verified and displayed in place of their tenancy ID in Instaproof results and in the Asset Overview
when dealing with Document profile Assets in DataTrails you can differentiate ‘final’ or ‘published’ versions of a document from other provenance information such as reviews or downloads
Reserved attributes are asset attributes that are used by the DataTrails platform and have a specific purpose. All reserved attributes have the arc_ prefix.
Select an attribute to see an example of it in use.
\ No newline at end of file
diff --git a/index.html b/index.html
index b7a18ee0b..043945189 100644
--- a/index.html
+++ b/index.html
@@ -1,8 +1,8 @@
-DataTrails - Provenance as a Service to boost confidence in digital decisions.
+DataTrails - Provenance as a Service to boost confidence in digital decisions.
\ No newline at end of file
diff --git a/index.min.2169bac53c9501da53226a5d8f31eb46176aa81bff8bf89abfd0f8819a23ba785de12b881051f4996d03f209837ad7ff3c681b5ce69d43fd1388c69fec54c5d0.js b/index.min.f27b568387425eb7833c7f0b9d85322b100b3157a6c9b93ffa6ed20a099de6b6434ed85c8f0b3a8ebff7d8f2624c2a42475c47e87b0ed2e46d307ec3025c1348.js
similarity index 97%
rename from index.min.2169bac53c9501da53226a5d8f31eb46176aa81bff8bf89abfd0f8819a23ba785de12b881051f4996d03f209837ad7ff3c681b5ce69d43fd1388c69fec54c5d0.js
rename to index.min.f27b568387425eb7833c7f0b9d85322b100b3157a6c9b93ffa6ed20a099de6b6434ed85c8f0b3a8ebff7d8f2624c2a42475c47e87b0ed2e46d307ec3025c1348.js
index d6845f842..b5febc802 100644
--- a/index.min.2169bac53c9501da53226a5d8f31eb46176aa81bff8bf89abfd0f8819a23ba785de12b881051f4996d03f209837ad7ff3c681b5ce69d43fd1388c69fec54c5d0.js
+++ b/index.min.f27b568387425eb7833c7f0b9d85322b100b3157a6c9b93ffa6ed20a099de6b6434ed85c8f0b3a8ebff7d8f2624c2a42475c47e87b0ed2e46d307ec3025c1348.js
@@ -7115,7 +7115,336 @@ By comparison, our Administrator, Jill, can see the full details of the Asset:
IAM Policies API Reference.
-`},{id:21,href:"https://docs.datatrails.ai/platform/administration/dropbox-integration/",title:"Dropbox Integration",description:"Integrating with Dropbox",content:`
The Dropbox Integration
+`},{id:21,href:"https://docs.datatrails.ai/developers/developer-patterns/3rdparty-verification/",title:"Verified Replication of the DataTrails Transparency Logs",description:"Supporting verified replication of DataTrails merkle logs",content:`
Introduction
+
Even without the measures described in this article, it is extremely challenging to compromise a transparency solution based on DataTrails.
+
To do so, the systems of more than just DataTrails need to be compromised in very specific ways.
+To illustrate this, consider this typical flow for how Data can be used in a transparent and tamper evident way with DataTrails.
This is already a very robust process. For this process to fail, the following steps must all be accomplished:
+
+
The source of the Data, which may not be the Owner, must be compromised to substitute the malicious Data.
+
Owner authentication of the Data, such as adding a signed digest in the metadata, must be compromised.
+
The DataTrails SaaS database must be compromised.
+
The DataTrails ledger must be compromised and re-built and re-signed.
+
+
Executing such an attack successfully would require significant effort and infiltration of both the Data source and DataTrails.
+Nonetheless, for use-cases where even this small degree of trust in DataTrails is unacceptable, the recipes in this article ensure the following guarantees are fully independent of DataTrails:
+
+
The guarantee of non-falsifiability: Event data cannot be falsified.
+
The guarantee of non-repudiation: Event data cannot be removed from the record (i.e. ‘shredded’ or deleted).
+
The guarantee of provability: Event data held here and now can be proven to be identical to the data created there and then (creating these proofs does not require the original event data).
+
The guarantee of demonstrable completeness: Series of events (trails) can be proven to be complete, with no gaps or omissions.
+
+
These guarantees are “fail safe” against regular data corruption of the log data.
+In the event of individual log entry corruption, verification checks would fail for that entry.
+
All modifications to the ledger which result in provable changes can be detected without a fully auditable replica.
+By maintaining a fully auditable replica, continued verifiable operation is possible even if DataTrails is prevented from operating.
+To provide this capability, and to check that all metadata is exactly as originally recorded, a copy of the metadata must also be replicated.
+In cases where this capability is required, data retention remains manageable and has completely predictable storage requirements.
+The log format makes it operationally very simple to discard data that ceases to be interesting.
+
+
The metadata is returned to the Owner when the event is recorded and is available from the regular API endpoints to any other authorized party.
+Obtaining the returned metadata is not covered in this article.
+
+
Replication Recipes
+
Environment Configuration for Veracity
+
The following recipes make use of these environment variables:
+
# DataTrails Public Tenant
+export PUBLIC_TENANT="tenant/6ea5cd00-c711-3649-6914-7b125928bbb4"
+
+# Synsation Demo Tenant
+# Replace TENANT with your Tenant ID to view your Tenant logs and events
+export TENANT="tenant/6a009b40-eb55-4159-81f0-69024f89f53c"
+
Maintaining a Tamper Evident Log Replica
+
Based on a window of assurance, a replica may be maintained with one command, once a week.
+
A guarantee that actions are only taken on verified data can be achieved by running the following command once a week:
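# Watch for tenants with recent log updates, then verify and replicate them locally
veracity --tenant $TENANT watch --horizon 180h | veracity replicate-logs --replicadir merklelogs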
A sensible value for --horizon is just a little (hours is more than enough) longer than the interval between updates.
+To miss an update for a tenant, more than 16,000 events would need to be recorded in the interval.
+
+
Larger time horizons may trigger rate limiting
+
+
Initializing a Replica for All Tenants
+
If a replica of all DataTrails tenants is required, run the previous command with a very long horizon.
Having done this once, you should revert to using a horizon that is just a little longer than your update interval.
+
Limiting the Replica to Specific Tenants
+
The previous command will replicate the logs of all tenants.
+This requires about 3.5 megabytes per 16,000 events.
+
To restrict a replica to a specific set of tenants, specify those tenants to the watch command.
+
A common requirement is the public attestation tenant plus your own tenant. To accomplish this, set $TENANT accordingly and run the following once a week.
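A minimal sketch, assuming the --tenant option accepts a comma-separated list of tenant identities:
# Replicate only the public attestation tenant and your own tenant
veracity --tenant "$PUBLIC_TENANT,$TENANT" watch --horizon 180h | veracity replicate-logs --replicadir merklelogs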
To initialize the replica, run the same command once but using an appropriately large --horizon
+
The remainder of this article discusses the commands replicate-logs and watch in more depth, covering how to replicate selected tenants and explaining the significance of the replicated materials.
+
How Veracity Supports Integrity and Inclusion Protection
+
DataTrails’ log format makes it simple to retain only the portions (massifs) of the log that are interesting.
+Discarding uninteresting portions does not affect the independence or verifiability of the retained log.
This diagram illustrates the logical flow when updating a local replica using veracity.
+
+
+ ---
+config:
+ theme: classic
+---
+sequenceDiagram
+ actor v as Verifier
+ box Runs locally to the verifier
+ participant V as Veracity
+ participant R as Replica
+ end
+ participant D as DataTrails
+
+ v -->> V: Safely update my replica to massif X please
+ V ->> D: Fetch and verify the remote massifs and seals up to X
+ V ->> R: Check the verified remote data is consistent with the replica
+ V ->> R: Update the replica with verified additions
+ V -->> v: All OK!
+
+
+
+
For the guarantees of non-falsifiability and non-repudiation to be independent of DataTrails, replication and verification of at least the most recently updated massif is necessary.
+The replica must be updated often enough to capture all massifs.
+As a massif, in the default tenant configuration, contains over 16,000 events, the frequency necessary to support this guarantee is low and completely determined by the specific tenant of interest.
+
Massifs verifying events that are no longer interesting can be safely discarded.
+Remembering that the order that events were recorded matches the order of data in the log, it is usually the case that all massifs before a certain point can be discarded together.
+
Saving the API response data when events are recorded, or obtaining the metadata using the DataTrails events API, is additionally required in order to support a full audit for data corruption.
+
When a trusted local copy of the verifiable log is included in the “verify before use” process, it is reasonable to rely on DataTrails storage of the metadata.
+If the DataTrails storage of the metadata is changed, the verification will “fail safe” against the local replicated log because the changed data will not verify against the local replica.
+While this is a “false negative”, it ensures safety in the face of accidental or malicious damage to the DataTrails storage systems without the burden of maintaining copies of the metadata recorded in DataTrails.
+Once the unsafe action is blocked, the appropriate next steps are very use-case dependent. The common thread is that it is critical the action is blocked in the first instance.
+
When the metadata is fetched, if it can be verified against the log replica, it proves that the DataTrails storage remains correct.
+If it does not verify, it is proven that the metadata held by DataTrails is incorrect, though the Data being processed by the Consumer may still be correct and safe.
+
The veracity replicate-logs and watch commands are used to maintain the replica of the verifiable log.
+
+
veracity watch is used to give notice of which tenants have updates to their logs that need to be considered for replication.
+
veracity replicate-logs performs the activities in the diagram above. It can be directed to examine a specific tenant, or it can be provided with the output of veracity watch.
+
+
Updating the Currently Open Massif
+
Every DataTrails log is a series of one or more massifs.
+The last, called the head, is where verification data for new events are recorded.
+Once the head is full, a new head automatically starts.
+
This means there are three basic scenarios that veracity copes with when updating a replica.
+
+
Updating the currently open replicated massif with the new additions in the DataTrails open massif.
+
Replicating the start of a new open massif from DataTrails.
+
Replicating a limited number of new massifs from DataTrails, performing local consistency checks only if the replicated massifs follow the latest local massif.
+
+
The first is the simplest to understand. In the diagram below the dashed boxes correspond to the open massifs.
+
The local replica of the open massif will always be equal to or smaller in size than the remote.
+Once veracity verifies that the remote copy is consistent with the remote seal, it then checks that the new data copied from the remote is consistent with its local copy of the open massif.
+Consistent simply means the update is an append, and that the remote has not “dropped” anything it contained the last time it was replicated.
+
If there is any discrepancy in any of these checks, the current local data is left unchanged.
The local replica starts out having only Massifs 0 and 1, and Massif 1 happens to be complete.
+On the next event recorded by DataTrails, a new remote massif, Massif 2, is created.
+More events may be recorded before the replica is updated.
+Each massif contains verification data for a little over 16,000 events.
+Provided the replication commands are run before Massif 2 is also filled, we are dealing with this case.
+
+The local Massif 1 is read because, before copying the remote Massif 2 into the local replica, its consistency against both the remote seal and the previous local massif, Massif 1, is checked.
+
Once those checks are successfully made, the local replica gains its initial copy of Massif 2.
Replicating The Next Open Massif with Veracity
Replicating, but Leaving a Gap
+
By default, veracity will fetch and verify all massifs, up to the requested massif, that follow immediately after the most recent local massif.
+In this case, where we request --massif 4, the default would be to fetch, verify, and replicate Massifs 2, 3 & 4.
+
By default, a full tenant log is replicated.
+The storage requirements are roughly 4mb per massif, and each massif has the verification data for about 16,000 events.
+
To provide a means to bound the size of the local replica and also to bound the amount of work, we provide the --ancestors option.
+This specifies a fixed limit on the number of massifs that will be fetched.
+In this example, the limit is 0, meaning massif 4 is fetched and verified, and we leave a gap between local massif 2 and the new local massif 4.
+The gap means the consistency of the remote massif 4 is not checked against the local replica.
+
The command veracity replicate-logs --ancestors 0 --massif 4 requests that massif 4 is verified and then replicated locally, but prevents it from being verified for consistency against the current local replica.
Replicating With Gaps
Replicating the Log for the Public Tenant
+
For illustration, we take a more detailed look at using watch and replicate-logs to replicate the public tenant's verifiable log data.
Here, there has been no activity in any tenant within the default watch horizon (how far back we look for changes).
+
To set an explicit, and in this example very large, horizon try the following:
+
veracity watch --horizon 10000h
+
+
+
The watch command is used to determine the massif index, even when you are only interested in a single tenant.
+You then provide that index to the replicate-logs command using the --massif option:
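For example, composing the options shown elsewhere in this article (substitute the massif index reported by watch for <N>):
# Verify and replicate the public tenant log up to and including massif <N>
veracity --tenant $PUBLIC_TENANT replicate-logs --massif <N> --replicadir merklelogs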
By default, all massifs up to and including the massif specified by --massif <N> are verified remotely and checked for consistency against the local replica (following the logical steps in the diagram above).
+
The numbered .log files are the verifiable data for your log.
+
The .sth files are
+COSE Sign1 binary format signed messages.
+Each .sth is associated with the identically numbered massif.
+The log root material in the .sth signature attests to the entire state of the log up to the end of the associated massif.
+The details of consuming the binary format of the seal and verifying the signature are beyond the scope of this article.
+
However, the implementation used by veracity can be found in the open source merkle log library maintained by DataTrails
+go-datatrails-merklelog
+
Takeaways
+
+
To be sure that mistaken or malicious changes to DataTrails data stores can always be detected, run this command about once a week:
+veracity --tenant $TENANT watch --horizon 180h | veracity replicate-logs --replicadir merklelogs
+
+This process guarantees you can’t be misrepresented: any alternate version of events would be provably false.
+
To guarantee continued operation even if DataTrails is prevented from operating, a copy of the DataTrails metadata must be retained.
+
+You can reasonably choose to trust the DataTrails copy because, when combined with a replicated verifiable merkle log, it is “fail-safe” even in the most extreme cases where DataTrails SaaS storage is compromised.
+
+`},{id:22,href:"https://docs.datatrails.ai/platform/administration/dropbox-integration/",title:"Dropbox Integration",description:"Integrating with Dropbox",content:`
The Dropbox Integration
Connecting your DataTrails tenancy to your Dropbox account will allow you to automatically record and maintain the provenance metadata of your files in an immutable Audit Trail.
DataTrails uses transparent and auditable distributed ledger technology to maintain an immutable trail of provenance metadata independent of, but in concert with, the original file in Dropbox.
The original data never enters the DataTrails system and remains on Dropbox.
@@ -7470,7 +7799,7 @@ You would disconnect in Dropbox if you no longer wish to use DataTrails for prov
This is how to connect and disconnect DataTrails and Dropbox, it is that simple! Please see our
FAQ for more information.
-`},{id:22,href:"https://docs.datatrails.ai/platform/administration/compliance-policies/",title:"Compliance Policies",description:"Creating and Managing Compliance Policies",content:`
Creating a Compliance Policy
+`},{id:23,href:"https://docs.datatrails.ai/platform/administration/compliance-policies/",title:"Compliance Policies",description:"Creating and Managing Compliance Policies",content:`
Creating a Compliance Policy
Compliance Policies are user-defined rule sets that Assets can be tested against. Compliance Policies only need to be created once; all applicable Assets will be tested against that policy thereafter.
For example, a policy might assert that “Maintenance Alarm Events must be addressed by a Maintenance Report Event, recorded within 72 hours of the alarm”. This creates a Compliance Policy in the system which any Asset can be tested against as needed.
As compliance is ensured by a regular series of Events, an Audit Trail builds up over time that allows compliance to be checked for the entire lifetime of the Asset.
@@ -7827,7 +8156,7 @@ An example response for a non-compliant Asset
"next_page_token": "",
"compliant_at": "2024-01-17T10:16:12Z"}
-`},{id:23,href:"https://docs.datatrails.ai/platform/administration/grouping-assets-by-location/",title:"Grouping Assets by Location",description:"Adding a Location",content:`
Locations associate an Asset with a ‘home’ that can help when governing sharing policies with OBAC and ABAC. Locations do not need pinpoint precision and can be named by site, building, or other logical grouping.
+`},{id:24,href:"https://docs.datatrails.ai/platform/administration/grouping-assets-by-location/",title:"Grouping Assets by Location",description:"Adding a Location",content:`
Locations associate an Asset with a ‘home’ that can help when governing sharing policies with OBAC and ABAC. Locations do not need pinpoint precision and can be named by site, building, or other logical grouping.
It may be useful to indicate an Asset’s origin. For example, if tracking traveling consultant’s laptops, you may wish to associate them with a ‘home’ office.
Caution: It is important to recognize that the location does not necessarily denote the Asset’s current position in space; it simply determines which facility the Asset belongs to. For things that move around, use GIS coordinates on Events instead. See
@@ -8393,7 +8722,7 @@ For more information on creating Events, please visit
-`},{id:24,href:"https://docs.datatrails.ai/developers/api-reference/app-registrations-api/",title:"App Registrations API",description:"App Registrations API Reference",content:`
+`},{id:25,href:"https://docs.datatrails.ai/developers/api-reference/app-registrations-api/",title:"App Registrations API",description:"App Registrations API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -8762,6 +9091,16 @@ If you are looking for a simple way to test our API you might prefer our
Human-readable display name for this Application.
+
+
roles
+
array
+
+
+
+
+
+
+
@@ -8851,6 +9190,16 @@ If you are looking for a simple way to test our API you might prefer our
Resource name for the application
+
+
roles
+
array
+
+
+
+
+
+
+
tenant_id
string
@@ -9075,6 +9424,16 @@ If you are looking for a simple way to test our API you might prefer our
Resource name for the application
+
+
roles
+
array
+
+
+
+
+
+
+
tenant_id
string
@@ -9235,6 +9594,16 @@ If you are looking for a simple way to test our API you might prefer our
Resource name for the application
+
+
roles
+
array
+
+
+
+
+
+
+
tenant_id
string
@@ -9393,6 +9762,16 @@ If you are looking for a simple way to test our API you might prefer our
Resource name for the application
+
+
roles
+
array
+
+
+
+
+
+
+
tenant_id
string
@@ -9449,7 +9828,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:25,href:"https://docs.datatrails.ai/developers/api-reference/assets-api/",title:"Assets API",description:"Assets API Reference",content:`
+`},{id:26,href:"https://docs.datatrails.ai/developers/api-reference/assets-api/",title:"Assets API",description:"Assets API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -11499,7 +11878,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:26,href:"https://docs.datatrails.ai/developers/api-reference/attachments-api/",title:"Attachments API",description:"Attachments API Reference",content:`
+`},{id:27,href:"https://docs.datatrails.ai/developers/api-reference/attachments-api/",title:"Attachments API",description:"Attachments API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -12620,7 +12999,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:27,href:"https://docs.datatrails.ai/developers/api-reference/blobs-api/",title:"Blobs API",description:"Blobs API Reference",content:`
+`},{id:28,href:"https://docs.datatrails.ai/developers/api-reference/blobs-api/",title:"Blobs API",description:"Blobs API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -13167,7 +13546,7 @@ For information on Attachments and how to implement them, please refer to
-`},{id:28,href:"https://docs.datatrails.ai/developers/api-reference/compliance-api/",title:"Compliance API",description:"Compliance API Reference",content:`
+`},{id:29,href:"https://docs.datatrails.ai/developers/api-reference/compliance-api/",title:"Compliance API",description:"Compliance API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -14479,7 +14858,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:29,href:"https://docs.datatrails.ai/developers/api-reference/events-api/",title:"Events API",description:"Events API Reference",content:`
+`},{id:30,href:"https://docs.datatrails.ai/developers/api-reference/events-api/",title:"Events API",description:"Events API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -16605,7 +16984,7 @@ For example:
-`},{id:30,href:"https://docs.datatrails.ai/developers/api-reference/iam-policies-api/",title:"IAM Policies API",description:"IAM Policies API Reference",content:`
+`},{id:31,href:"https://docs.datatrails.ai/developers/api-reference/iam-policies-api/",title:"IAM Policies API",description:"IAM Policies API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -18298,7 +18677,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:31,href:"https://docs.datatrails.ai/developers/api-reference/iam-subjects-api/",title:"IAM Subjects API",description:"IAM Subjects API Reference",content:`
+`},{id:32,href:"https://docs.datatrails.ai/developers/api-reference/iam-subjects-api/",title:"IAM Subjects API",description:"IAM Subjects API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -19215,7 +19594,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:32,href:"https://docs.datatrails.ai/developers/developer-patterns/scitt-api/",title:"Quickstart: SCITT Statements (Preview)",description:"Getting Started with SCITT: creating a collection of statements (Preview)",content:`
+`},{id:33,href:"https://docs.datatrails.ai/developers/developer-patterns/scitt-api/",title:"Quickstart: SCITT Statements (Preview)",description:"Getting Started with SCITT: creating a collection of statements (Preview)",content:`
The SCITT API is currently in preview and subject to change
The Supply Chain Integrity, Transparency and Trust (SCITT) initiative is a set of
@@ -19345,7 +19724,7 @@ By using the content-type parameter, verifiers can filter to specific types, fil
-`},{id:33,href:"https://docs.datatrails.ai/developers/api-reference/locations-api/",title:"Locations API",description:"Locations API Reference",content:`
+`},{id:34,href:"https://docs.datatrails.ai/developers/api-reference/locations-api/",title:"Locations API",description:"Locations API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -20490,7 +20869,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:34,href:"https://docs.datatrails.ai/developers/api-reference/public-assets-api/",title:"Public Assets API",description:"Public Assets API Reference",content:`
+`},{id:35,href:"https://docs.datatrails.ai/developers/api-reference/public-assets-api/",title:"Public Assets API",description:"Public Assets API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -21332,7 +21711,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:35,href:"https://docs.datatrails.ai/developers/api-reference/tenancies-api/",title:"Tenancies API",description:"Tenancies API Reference",content:`
+`},{id:36,href:"https://docs.datatrails.ai/developers/api-reference/tenancies-api/",title:"Tenancies API",description:"Tenancies API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -22467,7 +22846,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:36,href:"https://docs.datatrails.ai/developers/yaml-reference/story-runner-components/",title:"YAML Runner Components",description:"Common Keys Used for the Yaml Runner",content:`
+`},{id:37,href:"https://docs.datatrails.ai/developers/yaml-reference/story-runner-components/",title:"YAML Runner Components",description:"Common Keys Used for the Yaml Runner",content:`
Note: To use the YAML Runner you will need to install the datatrails-archivist python package.
@@ -22529,7 +22908,7 @@ If you are looking for a simple way to test our API you might prefer our
--client-id <your-client-id> \\
--client-secret <your-client-secret> \\
<path-to-yaml-file>
-
`},{id:37,href:"https://docs.datatrails.ai/developers/yaml-reference/assets/",title:"Assets YAML Runner",description:"Asset Actions Used with the Yaml Runner",content:`
+`},{id:38,href:"https://docs.datatrails.ai/developers/yaml-reference/assets/",title:"Assets YAML Runner",description:"Asset Actions Used with the Yaml Runner",content:`
Note: To use the YAML Runner you will need to install the datatrails-archivist python package.
@@ -22651,7 +23030,7 @@ If this is not needed then do not wait for confirmation.
description:Wait for all Assets in the wipp namespace to be confirmedattrs:arc_namespace:wipp
-
`},{id:38,href:"https://docs.datatrails.ai/developers/yaml-reference/events/",title:"Events YAML Runner",description:"Event Actions Used with the Yaml Runner",content:`
+`},{id:39,href:"https://docs.datatrails.ai/developers/yaml-reference/events/",title:"Events YAML Runner",description:"Event Actions Used with the Yaml Runner",content:`
Note: To use the YAML Runner you will need to install the datatrails-archivist python package.
@@ -22753,7 +23132,7 @@ If this is not needed then do not wait for confirmation.
arc_display_type:openasset_attrs:arc_display_type:door
-
`},{id:39,href:"https://docs.datatrails.ai/developers/yaml-reference/locations/",title:"Locations YAML Runner",description:"Location Actions Used with the Yaml Runner",content:`
+`},{id:40,href:"https://docs.datatrails.ai/developers/yaml-reference/locations/",title:"Locations YAML Runner",description:"Location Actions Used with the Yaml Runner",content:`
Note: To use the YAML Runner you will need to install the datatrails-archivist python package.
@@ -22802,7 +23181,7 @@ If this is not needed then do not wait for confirmation.
print_response:trueattrs:director:John Smith
-
`},{id:40,href:"https://docs.datatrails.ai/developers/yaml-reference/subjects/",title:"Subjects YAML Runner",description:"Subject Actions Used with the Yaml Runner",content:`
+`},{id:41,href:"https://docs.datatrails.ai/developers/yaml-reference/subjects/",title:"Subjects YAML Runner",description:"Subject Actions Used with the Yaml Runner",content:`
Note: To use the YAML Runner you will need to install the datatrails-archivist python package.
@@ -22912,7 +23291,7 @@ If this is not needed then do not wait for confirmation.
print_response:truesubject_label:A subject\`\`
-
`},{id:41,href:"https://docs.datatrails.ai/developers/yaml-reference/compliance/",title:"Compliance Policies YAML Runner",description:"Compliance Policy Actions Used with the Yaml Runner",content:`
+`},{id:42,href:"https://docs.datatrails.ai/developers/yaml-reference/compliance/",title:"Compliance Policies YAML Runner",description:"Compliance Policy Actions Used with the Yaml Runner",content:`
Note: To use the YAML Runner you will need to install the datatrails-archivist python package.
@@ -22946,7 +23325,7 @@ If this is not needed then do not wait for confirmation.
description:Check Compliance of EV pump 1.report:trueasset_label:ev pump 1
-
`},{id:42,href:"https://docs.datatrails.ai/developers/yaml-reference/estate-info/",title:"Estate Information YAML Runner",description:"Retrieve Estate Info Using the Yaml Runner",content:`
+`},{id:43,href:"https://docs.datatrails.ai/developers/yaml-reference/estate-info/",title:"Estate Information YAML Runner",description:"Retrieve Estate Info Using the Yaml Runner",content:`
Note: To use the YAML Runner you will need to install the datatrails-archivist python package.
This sub-section of the Developers subject area contains more detailed information on topics that cannot be covered by the API or YAML Runner references.
@@ -22975,7 +23354,7 @@ If this is not needed then do not wait for confirmation.
Software Package Profile →
-`},{id:44,href:"https://docs.datatrails.ai/developers/api-reference/caps-api/",title:"Caps API",description:"Caps API Reference",content:`
+`},{id:45,href:"https://docs.datatrails.ai/developers/api-reference/caps-api/",title:"Caps API",description:"Caps API Reference",content:`
Note: This page is primarily intended for developers who will be writing applications that will use DataTrails for provenance.
If you are looking for a simple way to test our API you might prefer our
Postman collection, the
@@ -23093,7 +23472,7 @@ If you are looking for a simple way to test our API you might prefer our
If you are a developer looking to easily add provenance to your data, this section is for you.
@@ -23184,7 +23563,7 @@ If you are looking for a simple way to test our API you might prefer our
-`},{id:50,href:"https://docs.datatrails.ai/platform/",title:"Platform",description:"DataTrails Platform and configuration documentation",content:`
+`},{id:51,href:"https://docs.datatrails.ai/platform/",title:"Platform",description:"DataTrails Platform and configuration documentation",content:`
Platform
If you are new to DataTrails, this is the place to start.
@@ -30319,7 +30698,336 @@ By comparison, our Administrator, Jill, can see the full details of the Asset:
IAM Policies API Reference.
-`}).add({id:21,href:"https://docs.datatrails.ai/platform/administration/dropbox-integration/",title:"Dropbox Integration",description:"Integrating with Dropbox",content:`
The Dropbox Integration
+`}).add({id:21,href:"https://docs.datatrails.ai/developers/developer-patterns/3rdparty-verification/",title:"Verified Replication of the DataTrails Transparency Logs",description:"Supporting verified replication of DataTrails merkle logs",content:`
Introduction
+
Even without the measures described in this article, it is extremely challenging to compromise a transparency solution based on DataTrails.
+
To do so, the systems of more than just DataTrails need to be compromised in very specific ways.
+To illustrate this, consider this typical flow for how Data can be used in a transparent and tamper evident way with DataTrails.
This is already a very robust process. For this process to fail, the following steps must all be accomplished:
+
+
The source of the Data, which may not be the Owner, must be compromised to substitute the malicious Data.
+
Owner authentication of the Data, such as adding a signed digest in the metadata, must be compromised.
+
The DataTrails SaaS database must be compromised.
+
The DataTrails ledger must be compromised and re-built and re-signed.
+
+
Executing such an attack successfully would require significant effort and infiltration of both the Data source and DataTrails.
+Nonetheless, for use-cases where even this small degree of trust in DataTrails is unacceptable, the recipes in this article ensure the following guarantees are fully independent of DataTrails:
+
+
The guarantee of non-falsifiability: Event data cannot be falsified.
+
The guarantee of non-repudiation: Event data cannot be removed from the record (i.e. ‘shredded’ or deleted).
+
The guarantee of provability: Event data held here and now can be proven to be identical to the data created there and then (creating these proofs does not require the original event data).
+
The guarantee of demonstrable completeness: Series of events (trails) can be proven to be complete, with no gaps or omissions.
+
+
These guarantees are “fail safe” against regular data corruption of the log data.
+In the event of individual log entry corruption, verification checks would fail for that entry.
+
All modifications to the ledger which result in provable changes can be detected without a fully auditable replica.
+By maintaining a fully auditable replica, continued verifiable operation is possible even if DataTrails is prevented from operating.
+To provide this capability, and to check that all metadata is exactly as originally recorded, a copy of the metadata must also be replicated.
+In cases where this capability is required, data retention remains manageable and has completely predictable storage requirements.
+The log format makes it operationally very simple to discard data that ceases to be interesting.
+
+
The metadata is returned to the Owner when the event is recorded and is available from the regular API endpoints to any other authorized party.
+Obtaining the returned metadata is not covered in this article.
+
+
Replication Recipes
+
Environment Configuration for Veracity
+
The following recipes make use of these environment variables:
+
# DataTrails Public Tenant
+export PUBLIC_TENANT="tenant/6ea5cd00-c711-3649-6914-7b125928bbb4"
+
+# Synsation Demo Tenant
+# Replace TENANT with your Tenant ID to view your Tenant logs and events
+export TENANT="tenant/6a009b40-eb55-4159-81f0-69024f89f53c"
+
Maintaining a Tamper Evident Log Replica
+
Based on a window of assurance, a replica may be maintained with one command, once a week.
+
A guarantee that actions are only taken on verified data can be achieved by running the following command once a week:
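# Watch for tenants with recent log updates, then verify and replicate them locally
veracity --tenant $TENANT watch --horizon 180h | veracity replicate-logs --replicadir merklelogs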
A sensible value for --horizon is just a little (hours is more than enough) longer than the interval between updates.
+To miss an update for a tenant, more than 16,000 events would need to be recorded in the interval.
+
+
Larger time horizons may trigger rate limiting
+
+
Initializing a Replica for All Tenants
+
If a replica of all DataTrails tenants is required, run the previous command with a very long horizon.
Having done this once, you should revert to using a horizon that is just a little longer than your update interval.
+
Limiting the Replica to Specific Tenants
+
The previous command will replicate the logs of all tenants.
+This requires about 3.5 megabytes per 16,000 events.
+
To restrict a replica to a specific set of tenants, specify those tenants to the watch command.
+
A common requirement is the public attestation tenant and your own tenant. To accomplish this, set $TENANT accordingly and run the following once a week:
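A minimal sketch, assuming the --tenant flag accepts a comma-separated list of tenant identities:

# Weekly: replicate only the public tenant and your own tenant
veracity --tenant $PUBLIC_TENANT,$TENANT watch --horizon 180h | veracity replicate-logs --replicadir merklelogs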
To initialize the replica, run the same command once, but with an appropriately large --horizon.
+
The remainder of this article discusses the commands replicate-logs and watch in more depth, covering how to replicate selected tenants and explaining the significance of the replicated materials.
+
How Veracity Supports Integrity and Inclusion Protection
+
DataTrails’ log format makes it simple to retain only the portions (massifs) of the log that are interesting.
Discarding uninteresting portions does not affect the independence or verifiability of the retained log.
This diagram illustrates the logical flow when updating a local replica using veracity:
---
config:
  theme: classic
---
sequenceDiagram
    actor v as Verifier
    box Runs locally to the verifier
    participant V as Veracity
    participant R as Replica
    end
    participant D as DataTrails

    v -->> V: Safely update my replica to massif X please
    V ->> D: Fetch and verify the remote massifs and seals up to X
    V ->> R: Check the verified remote data is consistent with the replica
    V ->> R: Update the replica with verified additions
    V -->> v: All OK!
+
For the guarantees of non-falsifiability and non-repudiation to be independent of DataTrails, replication and verification of at least the most recently updated massif is necessary.
+The replica must be updated often enough to capture all massifs.
As a massif in the default tenant configuration contains over 16,000 events, the necessary update frequency is both low and completely determined by the activity of the specific tenant of interest.
+
Massifs verifying events that are no longer interesting can be safely discarded.
Because the order in which events were recorded matches the order of data in the log, it is usually the case that all massifs before a certain point can be discarded together.
+
Saving the API response data when events are recorded, or obtaining the metadata later using the DataTrails events API, is additionally required in order to support a full audit for data corruption.
+
When a trusted local copy of the verifiable log is included in the “verify before use” process, it is reasonable to rely on DataTrails storage of the metadata.
If the DataTrails storage of the metadata is changed, verification will “fail safe” against the local replicated log, because the changed data will not verify against the local replica.
While this is a “false negative”, it ensures safety in the face of accidental or malicious damage to the DataTrails storage systems, without the burden of maintaining copies of the metadata recorded in DataTrails.
Once the unsafe action is blocked, the appropriate next steps are very use-case dependent. The common thread is that it is critical the action is blocked in the first instance.
+
When the metadata is fetched, if it can be verified against the log replica, it proves that the DataTrails storage remains correct.
+If it does not verify, it is proven that the metadata held by DataTrails is incorrect, though the Data being processed by the Consumer may still be correct and safe.
+
The veracity commands replicate-logs and watch are used to maintain the replica of the verifiable log.
+
+
veracity watch is used to give notice of which tenants have updates to their logs that need to be considered for replication.
+
veracity replicate-logs performs the activities in the diagram above. It can be directed to examine a specific tenant, or it can be provided with the output of veracity watch, as sketched below.
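A minimal sketch of both modes, using the flags that appear elsewhere in this article:

# Directed at a specific tenant and massif
veracity --tenant $TENANT replicate-logs --massif 4 --replicadir merklelogs

# Fed from the output of watch
veracity watch --horizon 180h | veracity replicate-logs --replicadir merklelogs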
+
+
Updating the Currently Open Massif
+
Every DataTrails log is a series of one or more massifs.
The last, called the head, is where verification data for new events is recorded.
+Once the head is full, a new head automatically starts.
+
This means there are three basic scenarios that veracity copes with when updating a replica:
1. Updating the currently open replicated massif with the new additions in the DataTrails open massif.
2. Replicating the start of a new open massif from DataTrails.
3. Replicating a limited number of new massifs from DataTrails, performing local consistency checks only if the replicated massifs follow the latest local massif.
+
The first is the simplest to understand. In the diagram below the dashed boxes correspond to the open massifs.
+
The local replica of the open massif will always be equal to or smaller in size than the remote.
Once veracity verifies that the remote copy is consistent with the remote seal, it then checks that the new data copied from the remote is consistent with its local copy of the open massif.
Consistent simply means it is an append, and that the remote has not “dropped” anything that it contained the last time it was replicated.
+
If there is any discrepancy in any of these checks, the current local data is left unchanged.
In this example, the local replica starts out with only Massifs 0 and 1, and Massif 1 happens to be complete.
On the next event recorded by DataTrails, a new remote massif, Massif 2, is created.
More events may be recorded before the replica is updated.
Each massif contains verification data for a little over 16,000 events.
Provided the replication commands are run before Massif 2 is also filled, we are dealing with this case.
+
The local Massif 1 is read because, before copying the remote Massif 2 into the local replica, the consistency of Massif 2 is checked against both the remote seal and the previous local massif, Massif 1.
+
Once those checks are successfully made, the local replica gains its initial copy of Massif 2.
Diagram: Replicating the Next Open Massif with Veracity
Replicating, but Leaving a Gap
+
By default, veracity will fetch and verify all massifs, up to the requested massif, that follow on immediately after the most recent local massif.
In this case, where we request --massif 4, the default would be to fetch, verify, and replicate Massifs 2, 3, and 4.

By default, a full tenant log is replicated.
The storage requirement is roughly 4MB per massif, and each massif holds the verification data for about 16,000 events.

To bound both the size of the local replica and the amount of work, the --ancestors option is provided.
This specifies a fixed limit on the number of massifs that will be fetched.
In this example, the limit is 0, meaning Massif 4 is fetched and verified, and we leave a gap between the local Massif 2 and the new local Massif 4.
The gap means the consistency of the remote Massif 4 is not checked against the local replica.

The command veracity replicate-logs --ancestors 0 --massif 4 requests that Massif 4 is verified and then replicated locally, but prevents it from being verified for consistency against the current local replica.
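Spelled out with the tenant and replica directory flags used in the other recipes:

# Fetch and verify massif 4 in isolation; with --ancestors 0 no earlier
# massifs are fetched, so no local consistency check is possible
veracity --tenant $TENANT replicate-logs --ancestors 0 --massif 4 --replicadir merklelogs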
Diagram: Replicating With Gaps
Replicating the Log for the Public Tenant
+
For illustration, we take a more detailed look at using watch and replicate-logs to replicate the public tenant’s verifiable log data.
If watch reports no results, there has been no activity in any tenant within the default watch horizon (how far back we look for changes).
+
To set an explicit, and in this example very large, horizon, try the following:
+
veracity watch --horizon 10000h
+
+
+
The watch command is used to determine the massifindex, even when you are only interested in a single tenant.
+You then provide that index to the replicate-logs command using the --massif option:
By default, all massifs up to and including the massif specified by --massif <N> are verified remotely and checked for consistency against the local replica (following the logical steps in the diagram above).
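A minimal sketch for the public tenant (the massif index 5 is illustrative; take the real value from the watch output):

# Find the current massif index for the public tenant
veracity --tenant $PUBLIC_TENANT watch --horizon 10000h

# Then verify and replicate everything up to and including that massif
veracity --tenant $PUBLIC_TENANT replicate-logs --massif 5 --replicadir merklelogs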
+
The numbered .log files are the verifiable data for your log.
+
The .sth files are COSE Sign1 binary format signed messages; these are the seals.
Each .sth is associated with the identically numbered massif.
The log root material in the .sth signature attests to the entire state of the log up to the end of the associated massif.
The details of consuming the binary format of the seal and verifying the signature are beyond the scope of this article.
+
However, the implementation used by veracity can be found in the open-source merkle log library maintained by DataTrails:
+go-datatrails-merklelog
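To see what has been replicated locally, standard tools are enough; a minimal sketch (the layout beneath merklelogs is managed by veracity):

# List the replicated massif data and the associated seals
find merklelogs -type f -name '*.log'
find merklelogs -type f -name '*.sth'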
+
Takeaways
+
+
To be sure that mistaken, or malicious, changes to DataTrails data stores can always be detected, run this command about once a week:
veracity --tenant $TENANT watch --horizon 180h | veracity replicate-logs --replicadir merklelogs
+
This process guarantees you can’t be misrepresented: any alternate version of events would be provably false.
+
To guarantee continued operation even if DataTrails is prevented from operating, a copy of the DataTrails metadata must be retained.
+
You can reasonably choose to trust the DataTrails copy of the metadata because, when combined with a replicated verifiable merkle log, it is “fail-safe” even in the most extreme cases of compromise of the DataTrails SaaS storage.
+
The Dropbox Integration
Connecting your DataTrails tenancy to your Dropbox account will allow you to automatically record and maintain the provenance metadata of your files in an immutable Audit Trail.
DataTrails uses transparent and auditable distributed ledger technology to maintain an immutable trail of provenance metadata independent of, but in concert with, the original file in Dropbox.
The original data never enters the DataTrails system and remains on Dropbox.
Creating a Compliance Policy
Compliance Policies are user-defined rule sets that Assets can be tested against. Compliance Policies only need to be created once; all applicable Assets will be tested against that policy thereafter.
For example, a policy might assert that “Maintenance Alarm Events must be addressed by a Maintenance Report Event, recorded within 72 hours of the alarm”. This creates a Compliance Policy in the system which any Asset can be tested against as needed.
As compliance is ensured by a regular series of Events, an Audit Trail builds up over time that allows compliance to be checked for the entire lifetime of the Asset.
An example response for a non-compliant Asset includes (fragment):
"next_page_token": "",
"compliant_at": "2024-01-17T10:16:12Z"
Locations associate an Asset with a ‘home’ that can help when governing sharing policies with OBAC and ABAC. Locations do not need pinpoint precision and can be named by site, building, or other logical grouping.
It may be useful to indicate an Asset’s origin. For example, if tracking traveling consultants’ laptops, you may wish to associate them with a ‘home’ office.
Caution: It is important to recognize that the location does not necessarily denote the Asset’s current position in space; it simply determines which facility the Asset belongs to. For things that move around, use GIS coordinates on Events instead. See
@@ -31597,7 +32305,7 @@ For more information on creating Events, please visit
\ No newline at end of file
diff --git a/platform/administration/dropbox-integration/index.html b/platform/administration/dropbox-integration/index.html
index 74557cf24..04b35dc71 100644
--- a/platform/administration/dropbox-integration/index.html
+++ b/platform/administration/dropbox-integration/index.html
@@ -1,4 +1,4 @@
-Dropbox Integration - DataTrails
+Dropbox Integration - DataTrails
DataTrails
@@ -15,4 +15,4 @@
Publish Event in the provenance metadata record for that file. The result is that the auditable provenance record for your files begins at the moment that you link a folder and that an immutable audit trail for each file automatically grows as the files are modified.
You are free, at any time, to link and unlink a folder at all levels of your folder tree using the instructions at
Editing the list of Linked folders
Note: During configuration, when you link a folder in the UI we will automatically link any subfolders too. Similarly, if you unlink a folder in the UI we will automatically unlink any subfolders.
If you create a subfolder in Dropbox after the integration has been set up it will be automatically added to the linked folder list. If you delete a subfolder or move it to an unlinked location it will be automatically removed from the linked folder list.
If a folder is unlinked for any reason, such as by direct configuration or by being moved, the Audit Trail will stop. Relinking the folder will restart the Audit Trail, but we cannot recover any Events that happened while the folder was unlinked.
Note: DataTrails masks the file path and replaces the filename with the Asset ID in the public Asset view that is returned by Instaproof. This is intentional so that private information cannot be accidentally released via the Instaproof search results. Knowledge of the filename is not needed to prove provenance because Instaproof will attest and verify the content of a file even if the filename has been changed. The permissioned view that is seen by an administrator who is logged into a tenancy will show the file name and the file path.
Select Settings or Integrations from the side bar and then the Integrations tabSettings
Select Dropbox and then Proceed.Proceed
If you are already logged into Dropbox on the device that you are using to set up the integration then you will proceed directly to step 3. If you are not logged in then Dropbox will ask you to authenticate.Log in to Dropbox
DataTrails now asks for permission to see metadata for your files and folders. Click Allow to give DataTrails permission to access your Dropbox Folders.Select Allow
Select the Dropbox folder that you wish to link to DataTrails and then click Confirm. The contents of this folder and all its subfolders will be added to DataTrails as public Document Profile Assets.Select folder and Confirm
You will see a success message. Dropbox will be connected and the selected folders will be linked.Success!
Click on an icon on the right to edit the connection or to disconnect.Configuration icons on the right
Check the Asset Overview to see your Dropbox files.Assets
Remember: The filenames of the Dropbox files are masked using the format xxx…
Select the File icon in DataTrailsFile icon on the right
You will see the list of available folders. Select a folder to link or deselect a folder to unlink and then click ConfirmReconfigure folders and Confirm
To disconnect DataTrails and Dropbox you have the option to disconnect using both applications.
Select the Disconnect icon in DataTrailsDisconnect Dropbox
You will see a warning message.Disconnect Warning
This means that this specific tenancy will no longer be used for provenance. You would do this if you no longer want to use a connected tenancy while continuing to use other connected tenancies.
If you also want to disconnect in Dropbox, log in to Dropbox, select your account and then Settings followed by the Apps tab. Select DataTrails and then DisconnectDisconnect DataTrails
You would disconnect in Dropbox if you no longer wish to use DataTrails for provenance. This will remove access permissions for all your tenancies and should be done after you have disconnected all your individual tenancies in DataTrails.
This is how to connect and disconnect DataTrails and Dropbox, it is that simple! Please see our
-FAQ for more information.
\ No newline at end of file
diff --git a/platform/administration/grouping-assets-by-location/index.html b/platform/administration/grouping-assets-by-location/index.html
index 37d8e92aa..c6d36078d 100644
--- a/platform/administration/grouping-assets-by-location/index.html
+++ b/platform/administration/grouping-assets-by-location/index.html
@@ -1,4 +1,4 @@
-Grouping Assets by Location - DataTrails
+Grouping Assets by Location - DataTrails
\ No newline at end of file
diff --git a/platform/administration/identity-and-access-management/index.html b/platform/administration/identity-and-access-management/index.html
index ea8365546..1eecdf6fc 100644
--- a/platform/administration/identity-and-access-management/index.html
+++ b/platform/administration/identity-and-access-management/index.html
@@ -1,4 +1,4 @@
-Identity and Access Management - DataTrails
+Identity and Access Management - DataTrails
Navigate to Settings on the sidebar and select Tenancy. Enter your SSO configuration, then select SAVE ENTERPRISE SSO CONFIG. Saving your configuration may take a moment.Configure SSO
NOTE: To retrieve the necessary data for the configuration form, your IDP must be configured to be compatible with DataTrails. Enter the information below.
\ No newline at end of file
diff --git a/platform/administration/sharing-access-inside-your-tenant/index.html b/platform/administration/sharing-access-inside-your-tenant/index.html
index 5c3375cbd..c5901788b 100644
--- a/platform/administration/sharing-access-inside-your-tenant/index.html
+++ b/platform/administration/sharing-access-inside-your-tenant/index.html
@@ -1,4 +1,4 @@
-Managing Internal Access to Your Tenant - DataTrails
+Managing Internal Access to Your Tenant - DataTrails
\ No newline at end of file
diff --git a/platform/administration/sharing-access-outside-your-tenant/index.html b/platform/administration/sharing-access-outside-your-tenant/index.html
index 1762b2ff0..d0e97d14d 100644
--- a/platform/administration/sharing-access-outside-your-tenant/index.html
+++ b/platform/administration/sharing-access-outside-your-tenant/index.html
@@ -1,4 +1,4 @@
-Managing External Access to Your Tenant - DataTrails
+Managing External Access to Your Tenant - DataTrails
Once complete, check the Asset is shared appropriately; Mandy should only be able to see the Name, Type and an Image of the container as well as the Asset’s custom weight and length attributes.Mandy's view as an Administrator of the External Organization
By comparison, our Administrator, Jill, can see the full details of the Asset:Jill's view as an Administrator
If Mandy wishes to share what she can to Non-Administrators within her organization, it is her responsibility to create an ABAC Policy as she would any other Asset she has access to.
There are many possible fine-grained controls and as such ABAC and OBAC Policy Creation is an extensive topic. To find out more, head over to the
-IAM Policies API Reference.
\ No newline at end of file
diff --git a/platform/administration/verified-domain/index.html b/platform/administration/verified-domain/index.html
index ec7826f56..8a1dae6af 100644
--- a/platform/administration/verified-domain/index.html
+++ b/platform/administration/verified-domain/index.html
@@ -1,4 +1,4 @@
-Verified Domain - DataTrails
+Verified Domain - DataTrails
DataTrails
@@ -11,4 +11,4 @@
Tenant Display Name. Tenant display names are internal, appearing only within your own Tenancy, and are not visible to anyone you share with. A verified domain name must be set by the DataTrails team, and will be visible to actors outside your Tenancy.
Why is it important to verify my organization’s domain?#
Getting your organization’s domain verified indicates that you are who you say you are. This helps close the trust gap inherent to information sharing between organizations or with the public.
Without domain verification, the Organization is noted as the publisher’s Tenant ID. Verifying your domain not only shows that this information comes from a legitimate actor on behalf of the organization, but also replaces the Tenant ID with your domain name so consumers can more easily identify the publishing organization. For example, someone attesting information on behalf of DataTrails would have datatrails.ai.
Organization without Verified Domain
Organization with Verified Domain
Note: You do not see the badge if you are logged into DataTrails.
The DataTrails team is happy to help you obtain your verified domain badge. Please contact support@datatrails.ai from an email address which includes the domain you wish to verify. For example, email us from @datatrails.ai to verify the datatrails.ai domain. We will send you a confirmation email to make sure that the details are correct.
In order to protect our user community, it is important for us to verify that the person making the request is authorized to do so by the owner of the domain. We will carry out some internal checks based on the information that we have been given and we may request further evidence from you to prove that you own or control the domain in question. Typically, this will be in the form of public company information or domain registration records. Please be prepared to share this evidence with us.
Checking the Verified Domain of an External Organization#
If an organization has a verified domain with DataTrails, it will be displayed when you view a Public Asset they have published. You may also retrieve this information via the API if you know the organization’s Tenant ID.
curl -v -X GET \
  -H "@$HOME/.datatrails/bearer-token.txt" \
  https://app.datatrails.ai/archivist/v1/tenancies/{uuid}:publicinfo
If you are new to DataTrails, this is the place to start.
The foundations of understanding the DataTrails platform are explained in the Overview. This will introduce the basic (and not so basic) concepts and take you through creating your first Asset and registering the first Event of your audit trail.
The Administration section will show you how to manage your Tenancy and control access to your Assets.
Check out the sub-sections below for more information!
Note: Creation and editing of Compliance Policies is only supported through the API.
Trust is subjective. Compliance is a judgement call. No matter what security technology you have in play, every trust decision you make will depend on the circumstances: who is accessing what; where they’re coming from; how sensitive an operation they’re attempting; the consequences of getting it wrong. An Asset that is safe in one context may not be in another.
By maintaining a complete traceable record of Who Did What When to a Thing, DataTrails makes it possible for any authorized stakeholder to quickly and easily verify that critical processes have been followed and recorded correctly. And if they weren’t, the record makes it easy to discover where things went wrong and what to fix. For instance, missed or late maintenance rounds can be detected simply by spotting gaps in the maintenance record; cyber vulnerable devices can be found by comparing ideal baselines with patching records; out-of-order process execution and handling violations are visible to all; and back-dating is automatically detectable.
All of this is very valuable in audit and RCA situations after an incident, where there is time to collect together Asset records, piece together the important parts, and analyze the meaning.
But what if the same information could be used for real-time decision-making that might avert an incident? This is where DataTrails’ “compliance posture” APIs come in. These take the thinking and processing burden off the client by providing a single, simple API call to answer the complex question: “given all you know about this asset, should I trust it right now?”. Additionally, and crucially for sensitive use cases, the yes or no answer comes with a detailed, defensible reason why, which can be inspected by relevant stakeholders during or after the event.
When put all together, this enables high quality decision making based on the best available data, even giving confidence to automated or AI systems to play a full part in operations. Assets can be checked as part of access control logic, prior to accepting data or commands from them, accepting a shipment, or anything else that is important to your business. Crucially, each stakeholder is able to define their own view on Compliance, meaning they can each apply their own unique lens and business concerns to the same evidence base.
In order to make these trust decisions, DataTrails can be configured with Compliance Policies to check Assets against. These policies specify things like tolerance for vulnerability windows, maintenance SLAs, or detecting unusual values for attributes. For example:
“Assets must be patched within 40 days of vulnerability notification”
“Maintenance calls must be answered within 72 hours”
“rad level must be less than 7”
Policies can also declare relative tolerances, such as:
“No shipping transfer should be more than 10% longer than the average time”
“The reported weight of this container should be within 1 standard deviation of the historic mean”
Individual assets either pass or fail, and organizations can calculate their overall security/compliance posture based on what proportion of their assets are breaching their policy set. Compliance signals can also be used to identify where risk lies in an organization and help to prioritize remedial activities.
As with Assets and Events, Compliance Policies are very flexible and can be configured to answer a wide range of business problems. The following categories of policy are supported:
COMPLIANCE_RICHNESS: This Compliance Policy checks whether a specific attribute of an Asset is within acceptable bounds. For example, “Weight attribute must be less than 1000 kg”.
COMPLIANCE_SINCE: This Compliance Policy checks if the time since the last occurrence of a specific Event Type is within a specified threshold. For example, “Time since last Maintenance must be less than 72 hours”.
COMPLIANCE_CURRENT_OUTSTANDING: This Compliance Policy will only pass if there is an associated closing event addressing a specified outstanding event. For example, checking there are no outstanding “Maintenance Request” Events that are not addressed by an associated “Maintenance Performed” Event.
COMPLIANCE_PERIOD_OUTSTANDING: This Compliance Policy will only pass if the time between a pair of correlated events did not exceed the defined threshold. For example, a policy checking that the time between “Maintenance Request” and “Maintenance Performed” Events does not exceed the maximum 72 hours.
COMPLIANCE_DYNAMIC_TOLERANCE: This Compliance Policy will only pass if the time between a pair of correlated events, or the value of an attribute, does not deviate from the usually observed values by more than a defined tolerance. For example, a policy checking that maintenance times are not considerably longer than normal, or that the weight of a container is not much less than the typical average.
Note: To correlate Events, define the attribute arc_correlation_value in the Event attributes and set it to the same value on each pair of Events that are to be associated.
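For example, a hypothetical pair of correlated Event attribute payloads; the display types follow the maintenance example above, and the correlation value itself is illustrative:

[
  { "arc_display_type": "Maintenance Request",   "arc_correlation_value": "pump-1-alarm-0042" },
  { "arc_display_type": "Maintenance Performed", "arc_correlation_value": "pump-1-alarm-0042" }
]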
In the Asset example above there is an at_time property, which reflects a date and time at which these attributes and values were contemporary. Usually this will just be the current system time, but with DataTrails it is possible to go back in time and ask the question “what would that asset have looked like to me had I looked at it last week/last year/before the incident?”. Using its high integrity record of Asset lineage, DataTrails can give clear and faithful answers to those questions with no fear of backdating, forgery, or repudiation getting in the way.
To do this, simply add at_time=TIMESTAMP to your query. For example, to check the state an Asset was in at 15:30 UTC on 23rd June:
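A minimal sketch using the Assets API; the asset UUID placeholder and the year are illustrative:

curl -g -v -X GET \
  -H "@$HOME/.datatrails/bearer-token.txt" \
  "https://app.datatrails.ai/archivist/v2/assets/{uuid}?at_time=2024-06-23T15:30:00Z"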
Compliance calls can be similarly modified to answer questions like “had I asked this question at the time, what would the answer have been?” or “had the AI asked this question, would it have made a better decision?”. This can be done by adding a compliant_at timestamp to the compliance request.
\ No newline at end of file
diff --git a/platform/overview/core-concepts/index.html b/platform/overview/core-concepts/index.html
index ba727a7a7..a5ea536a9 100644
--- a/platform/overview/core-concepts/index.html
+++ b/platform/overview/core-concepts/index.html
@@ -1,4 +1,4 @@
-Core Concepts - DataTrails
+Core Concepts - DataTrails
DataTrails
@@ -19,4 +19,4 @@
Public View which is visible to everyone.
The purpose of this view is to allow anyone to verify that the document that they are using is genuine and has not been altered. When the document Audit Trail is combined with
Instaproof a user of your data can easily find out which version of a document they have and confirm that it is genuine.
Using the four concepts of Tenancy, Assets, Events, and Access Policies it is possible to create a Golden Thread of evidence that makes up the DataTrails Audit Trail.
This has many use cases relating to content authenticity but can also be applied to supply chain integrity and standards compliance, or in fact anything where stakeholders need transparency and trust.The Golden Thread
\ No newline at end of file
diff --git a/platform/overview/creating-an-asset/index.html b/platform/overview/creating-an-asset/index.html
index bb0127104..9b40588a6 100644
--- a/platform/overview/creating-an-asset/index.html
+++ b/platform/overview/creating-an-asset/index.html
@@ -1,4 +1,4 @@
-Creating an Asset - DataTrails
+Creating an Asset - DataTrails
Here we see all details entered: The extended attributes and a history of Events recorded on the Asset.
Note: After registration, Assets cannot be updated using the asset creation screens but an Asset’s Asset Attributes can be updated as part of an Event.
For more information on creating Events,
-click here.
The first Event will always be the Asset Creation. In the next section, we will cover how to create your own Events for your Asset.
\ No newline at end of file
diff --git a/platform/overview/creating-an-event-against-an-asset/index.html b/platform/overview/creating-an-event-against-an-asset/index.html
index 13cffead8..cd0c141e6 100644
--- a/platform/overview/creating-an-event-against-an-asset/index.html
+++ b/platform/overview/creating-an-event-against-an-asset/index.html
@@ -1,4 +1,4 @@
-Creating an Event Against an Asset - DataTrails
+Creating an Event Against an Asset - DataTrails
Using the sidebar, select Instaproof and then drag a document into the search area.Instaproof Search Area
Document not found If the document that you are verifying has not been found, you will see a red response banner.Document Not Found
The possible reasons for this outcome are:
The document owner has not registered the document in their DataTrails tenancy
The document owner has not published this version of the document as an event
The document has been modified since it was registered with DataTrails
In all cases you should contact the document owner to find out whether your document version can be trusted.
Document Found
Note: In this screenshot we are using the file greenfrog.jpg which can be downloaded from our
Instaproof Samples page.
If the document has been registered with DataTrails, you will see a green response banner together with a list of all the matching Document Profile Assets. This means that the version of the document that you have has a verifiable provenance record and an immutable audit trail.Document Found
At the top of the image you can see the document that was checked and found on Instaproof.
Note: We don’t need to access your document to find its provenance; everything that you see in the Instaproof results is held locally and was recorded by the document owner when the document was registered or events were recorded.
You can check additional documents by dragging them on top of this area.
Some of the results may be from verified organizations and others from unverified members of the DataTrails community. All results contribute something to the provenance and life history of this document.
A Verified Organization has a
verified domain associated with their DataTrails account. This helps to confirm the identity of the document source and is likely the thing to look for if you want ‘official’ provenance records. A Verified Domain can be used to link an identity (such as a company or a brand name) to a DataTrails Tenancy.
The Other Results are those from unverified DataTrails accounts - other members of the DataTrails community who have made claims or observations about the document you’re interested in.
While they may seem less ‘official’ than verified account results, they may still be useful to you. The identity of all users making attestations in DataTrails is checked, recorded, and immutable, even if they are not (yet) associated with a verified domain name.
Click on a result to see details of the document history. You will see the Event details of the version that matches your document on the right with a partial view of the Asset details for the latest version on the left. Close the Event details to see the full Asset details view.
Asset Details Tab
The Asset details tab shows the information about the asset attributes.
Includes the current version, the organization, and Verified Domain badge, if applicable.
Public attestation and visibility - Public means that the document is publicly accessible using the public URL. Permissioned means that it is private and requires shared access to be enabled for a user to be able to view it.
Type - For Document Profile Assets this will always be ‘Document’.
Description - an optional description of the Asset
Attributes - This drop down section contains any custom attributes that were added to the asset.
Versions - the published versions of the document
Note: The share button allows you to access and copy the permissioned and public (if enabled) links for the asset to share with other users. Private links are for logged in users with permissions assigned in an Access Policy, Public links are for everyone.
Share Links
The Event History tab shows the full history of Events including custom Events, new Versions and Withdraw Events.
Click on the tab and select an Event to view the details.
Event History Overview Tab
The Overview information about the Event
Event Identity - The Event ID will always be of the format ‘publicassets/<asset_id>/events/<event_id>’ for public assets or ‘assets/<asset_id>/events/<event_id>’ for private assets.
Asset Identity - the ID of the parent Asset for this Event.
Transaction - This link contains the details of the Event transaction.Transaction Details
Type - For Document Profile Events this will always be ‘Publish’
Document changes - The version and document hash for new version Events. There is no data here for custom Events.
The Event attributes and Asset attributes tabs contain information about any custom attributes that were added or modified as part of this Event.
DataTrails provides Provenance as a Service that continuously proves Who Did What When to all data types.
DataTrails enables enterprises to build trust in data such as documents, images and sound files by ensuring that you know the origin and history of the data that you are using.
This can also be applied to multi-party Assets such as software and physical items, allowing you to make sure that processes are fit for purpose to comply with IT controls, corporate policies, and government regulations.
DataTrails permanently records evidence into an Immutable Audit Trail to bring the right level of trust in data for faster, confident decisions with lower business risk by combining:
Metadata Governance - Empower the right people in organizations to set, enforce, and execute complex data sharing policies.
Authenticated Provenance - Deliver full traceability on all internal and external data sources to speed and assure digital decisions.
Continuous Accountability - Instantly auditable evidence “Proves Who Did What When” for any shared Asset to delight your GRC team.
Persistent Integrity - Create a complete, unbroken, and permanent record of shared Event transactions, delivering continuous assurance for faster digital decisions.
DataTrails delivers assured metadata in a single line of code in a way that makes recording and auditing the full lifecycle of a piece of data simple. Any authorized participant (including a user, a software agent or an endpoint device) can register the Events that they are involved in. Users of the data can see a full picture of the data’s origin and history and by understanding Who Did What When, human actors and software/AI systems can make stronger real-time judgments about the trustworthiness of your data.DataTrails Functionality
Here we see all details entered: The extended attributes and a history of Events recorded on the Document.
Note: To update the details of your Asset after it has been created, you must create an Event containing Asset Attributes that conform to the
Document Profile.
For more information on creating Events,
click here.
The first Event in the Event History will always be the Document Registration. In the next section, we will cover how to create your own Events for your Document.
The first Event in the Event History will always be the Document Registration. In the next section, we will cover how to create your own Events for your Document.
\ No newline at end of file
diff --git a/platform/overview/registering-an-event-against-a-document-profile-asset/index.html b/platform/overview/registering-an-event-against-a-document-profile-asset/index.html
index 159785a02..6cc5fb837 100644
--- a/platform/overview/registering-an-event-against-a-document-profile-asset/index.html
+++ b/platform/overview/registering-an-event-against-a-document-profile-asset/index.html
@@ -1,4 +1,4 @@
-Registering an Event Against a Document Profile Asset - DataTrails
+Registering an Event Against a Document Profile Asset - DataTrails
\ No newline at end of file
diff --git a/sales/contactus/index.html b/sales/contactus/index.html
index 6f52a6d56..92c165a50 100644
--- a/sales/contactus/index.html
+++ b/sales/contactus/index.html
@@ -1,8 +1,8 @@
-Contact Us - DataTrails
+Contact Us - DataTrails
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index 85b02dd9e..7cca7e0a5 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -1 +1 @@
-/usecases/responsible-ai/2024-03-14T11:33:27+00:00weekly0.5/platform/overview/introduction/2021-06-14T10:57:58+01:00weekly0.5/developers/developer-patterns/getting-access-tokens-using-app-registrations/2023-09-27T11:12:25+01:00weekly0.5/platform/overview/core-concepts/2021-06-14T10:57:58+01:00weekly0.5/usecases/authenticity-media-files/2021-05-31T15:18:01+01:00weekly0.5/usecases/sc-state-machine/2024-03-26T14:03:01+00:00weekly0.5/platform/overview/advanced-concepts/2024-03-19T10:57:58+01:00weekly0.5/developers/developer-patterns/containers-as-assets/2021-05-31T15:18:01+01:00weekly0.5/usecases/sc-asset-lifecycle/2024-03-26T14:02:53+00:00weekly0.5/developers/developer-patterns/namespace/2021-05-31T15:18:01+01:00weekly0.5/platform/overview/creating-an-asset/2021-05-18T14:52:25+01:00weekly0.5/usecases/sc-chain-of-custody/2024-03-26T14:03:19+00:00weekly0.5/platform/overview/creating-an-event-against-an-asset/2021-05-18T15:32:01+01:00weekly0.5/platform/overview/registering-a-document-profile-asset/2023-06-29T15:11:03+01:00weekly0.5/developers/developer-patterns/document-profile/2021-05-31T15:18:01+01:00weekly0.5/platform/overview/registering-an-event-against-a-document-profile-asset/2023-07-26T13:07:55+01:00weekly0.5/developers/developer-patterns/software-package-profile/2023-06-26T11:56:01+01:00weekly0.5/platform/overview/instaproof/2023-07-18T12:10:19+01:00weekly0.5/developers/developer-patterns/veracity/2024-08-22T19:35:35+01:00weekly0.5/platform/overview/public-attestation/2021-05-18T14:52:25+01:00weekly0.5/usecases/bill-of-materials/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/navigating-merklelogs/weekly0.5/platform/administration/identity-and-access-management/2021-06-14T10:57:58+01:00weekly0.5/developers/developer-patterns/massif-blob-offset-tables/weekly0.5/platform/administration/verified-domain/2021-05-18T14:52:25+01:00weekly0.5/platform/administration/sharing-access-inside-your-tenant/2021-05-18T15:33:03+01:00weekly0.5/platform/administration/sharing-access-outside-your-tenant/2021-05-18T15:33:31+01:00weekly0.5/platform/administration/dropbox-integration/2023-09-15T13:18:42+01:00weekly0.5/platform/administration/compliance-policies/2021-05-18T14:52:25+01:00weekly0.5/platform/administration/grouping-assets-by-location/2021-05-18T15:32:27+01:00weekly0.5/glossary/common-datatrails-terms/2022-10-19T07:39:44-07:00weekly0.5/glossary/reserved-attributes/2022-10-19T07:39:44-07:00weekly0.5/developers/api-reference/app-registrations-api/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/assets-api/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/attachments-api/2021-06-09T12:05:02+01:00weekly0.5/developers/api-reference/blobs-api/2021-06-09T13:32:57+01:00weekly0.5/developers/api-reference/compliance-api/2021-06-09T12:07:13+01:00weekly0.5/developers/api-reference/events-api/2021-06-09T11:48:40+01:00weekly0.5/developers/api-reference/iam-policies-api/2021-06-09T12:02:15+01:00weekly0.5/developers/api-reference/iam-subjects-api/2021-06-09T12:02:15+01:00weekly0.5/developers/developer-patterns/scitt-api/2021-06-09T13:49:35+01:00weekly0.5/developers/api-reference/locations-api/2021-06-09T11:56:23+01:00weekly0.5/developers/api-reference/public-assets-api/2021-06-09T11:56:23+01:00weekly0.5/developers/api-reference/tenancies-api/2021-06-09T13:29:57+01:00weekly0.5/developers/yaml-reference/story-runner-components/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/assets/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/events/2021-06-09T11:39:0
3+01:00weekly0.5/developers/yaml-reference/locations/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/subjects/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/compliance/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/estate-info/2021-06-09T11:39:03+01:00weekly0.5/developers/developer-patterns/2023-05-31T10:14:18+01:00weekly0.5/developers/api-reference/caps-api/2024-03-05T11:30:29+00:00weekly0.5/platform/administration/2023-06-01T10:14:18+01:00weekly0.5/developers/yaml-reference/2023-05-31T10:14:18+01:00weekly0.5/glossary/2021-06-09T10:19:37+01:00weekly0.5/usecases/2021-05-20T17:42:10+01:00weekly0.5/developers/api-reference/2021-06-09T10:19:37+01:00weekly0.5/platform/overview/2021-05-20T12:03:27+01:00weekly0.5/developers/2020-10-06T08:48:23+00:00weekly0.5/platform/2020-10-06T08:48:23+00:00weekly0.5/2020-10-06T08:47:36+00:00weekly0.5/contributors/weekly0.5
\ No newline at end of file
+/usecases/responsible-ai/2024-03-14T11:33:27+00:00weekly0.5/platform/overview/introduction/2021-06-14T10:57:58+01:00weekly0.5/developers/developer-patterns/getting-access-tokens-using-app-registrations/2023-09-27T11:12:25+01:00weekly0.5/platform/overview/core-concepts/2021-06-14T10:57:58+01:00weekly0.5/usecases/authenticity-media-files/2021-05-31T15:18:01+01:00weekly0.5/usecases/sc-state-machine/2024-03-26T14:03:01+00:00weekly0.5/platform/overview/advanced-concepts/2024-03-19T10:57:58+01:00weekly0.5/developers/developer-patterns/containers-as-assets/2021-05-31T15:18:01+01:00weekly0.5/usecases/sc-asset-lifecycle/2024-03-26T14:02:53+00:00weekly0.5/developers/developer-patterns/namespace/2021-05-31T15:18:01+01:00weekly0.5/platform/overview/creating-an-asset/2021-05-18T14:52:25+01:00weekly0.5/usecases/sc-chain-of-custody/2024-03-26T14:03:19+00:00weekly0.5/platform/overview/creating-an-event-against-an-asset/2021-05-18T15:32:01+01:00weekly0.5/platform/overview/registering-a-document-profile-asset/2023-06-29T15:11:03+01:00weekly0.5/developers/developer-patterns/document-profile/2021-05-31T15:18:01+01:00weekly0.5/platform/overview/registering-an-event-against-a-document-profile-asset/2023-07-26T13:07:55+01:00weekly0.5/developers/developer-patterns/software-package-profile/2023-06-26T11:56:01+01:00weekly0.5/platform/overview/instaproof/2023-07-18T12:10:19+01:00weekly0.5/developers/developer-patterns/veracity/2024-08-22T19:35:35+01:00weekly0.5/platform/overview/public-attestation/2021-05-18T14:52:25+01:00weekly0.5/usecases/bill-of-materials/2021-05-31T15:18:01+01:00weekly0.5/developers/developer-patterns/navigating-merklelogs/weekly0.5/platform/administration/identity-and-access-management/2021-06-14T10:57:58+01:00weekly0.5/developers/developer-patterns/massif-blob-offset-tables/weekly0.5/platform/administration/verified-domain/2021-05-18T14:52:25+01:00weekly0.5/platform/administration/sharing-access-inside-your-tenant/2021-05-18T15:33:03+01:00weekly0.5/platform/administration/sharing-access-outside-your-tenant/2021-05-18T15:33:31+01:00weekly0.5/developers/developer-patterns/3rdparty-verification/2024-08-22T19:35:35+01:00weekly0.5/platform/administration/dropbox-integration/2023-09-15T13:18:42+01:00weekly0.5/platform/administration/compliance-policies/2021-05-18T14:52:25+01:00weekly0.5/platform/administration/grouping-assets-by-location/2021-05-18T15:32:27+01:00weekly0.5/glossary/common-datatrails-terms/2022-10-19T07:39:44-07:00weekly0.5/glossary/reserved-attributes/2022-10-19T07:39:44-07:00weekly0.5/developers/api-reference/app-registrations-api/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/assets-api/2021-06-09T11:39:03+01:00weekly0.5/developers/api-reference/attachments-api/2021-06-09T12:05:02+01:00weekly0.5/developers/api-reference/blobs-api/2021-06-09T13:32:57+01:00weekly0.5/developers/api-reference/compliance-api/2021-06-09T12:07:13+01:00weekly0.5/developers/api-reference/events-api/2021-06-09T11:48:40+01:00weekly0.5/developers/api-reference/iam-policies-api/2021-06-09T12:02:15+01:00weekly0.5/developers/api-reference/iam-subjects-api/2021-06-09T12:02:15+01:00weekly0.5/developers/developer-patterns/scitt-api/2021-06-09T13:49:35+01:00weekly0.5/developers/api-reference/locations-api/2021-06-09T11:56:23+01:00weekly0.5/developers/api-reference/public-assets-api/2021-06-09T11:56:23+01:00weekly0.5/developers/api-reference/tenancies-api/2021-06-09T13:29:57+01:00weekly0.5/developers/yaml-reference/story-runner-components/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/assets
/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/events/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/locations/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/subjects/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/compliance/2021-06-09T11:39:03+01:00weekly0.5/developers/yaml-reference/estate-info/2021-06-09T11:39:03+01:00weekly0.5/developers/developer-patterns/2023-05-31T10:14:18+01:00weekly0.5/developers/api-reference/caps-api/2024-03-05T11:30:29+00:00weekly0.5/platform/administration/2023-06-01T10:14:18+01:00weekly0.5/developers/yaml-reference/2023-05-31T10:14:18+01:00weekly0.5/glossary/2021-06-09T10:19:37+01:00weekly0.5/usecases/2021-05-20T17:42:10+01:00weekly0.5/developers/api-reference/2021-06-09T10:19:37+01:00weekly0.5/platform/overview/2021-05-20T12:03:27+01:00weekly0.5/developers/2020-10-06T08:48:23+00:00weekly0.5/platform/2020-10-06T08:48:23+00:00weekly0.5/2020-10-06T08:47:36+00:00weekly0.5/contributors/weekly0.5
\ No newline at end of file
diff --git a/support/contactus/index.html b/support/contactus/index.html
index 2617c486a..135c07720 100644
--- a/support/contactus/index.html
+++ b/support/contactus/index.html
@@ -1,8 +1,8 @@
-Contact Us - DataTrails
+Contact Us - DataTrails
\ No newline at end of file
diff --git a/usecases/authenticity-media-files/index.html b/usecases/authenticity-media-files/index.html
index a4f9b3041..65558e380 100644
--- a/usecases/authenticity-media-files/index.html
+++ b/usecases/authenticity-media-files/index.html
@@ -1,4 +1,4 @@
-Authenticity of Media and Files - DataTrails
+Authenticity of Media and Files - DataTrails
A very simple yet powerful pattern for using DataTrails is the Authenticity pattern. This is a good choice when dealing with data or documents where trust, integrity and authenticity are more important than secrecy. This could be data that is shared between business partners or more simply the relationship between creators and consumers of digital media.
The DataTrails platform separates data from its provenance metadata. By recording the metadata in the DataTrails platform it becomes an irrefutable record of the origin, provenance, integrity and authenticity of the media asset. When the data is updated a corresponding Event updates the metadata in DataTrails to build an immutable audit trail of the history of that data.
Together with fine-grained attribute-based access controls, the platform provides a trust and visibility layer that supports trusted data sharing and supplies evidence to resolve contested scenarios.
Both private and public stakeholders can verify that what they see on their screen is authentic and has not been tampered with.
The obvious example of a piece of digital media is a photographic image, but the pattern applies equally to graphical images as well as sound and video recordings.
A provenance history helps to establish the authenticity and integrity of digital media content. It allows users to verify that the content that they are consuming or sharing is genuine and has not been tampered with or manipulated. In an era of declining trust in digital media caused by an increased awareness of misinformation, AI, and deepfakes, understanding the provenance of digital media is crucial for restoring trust and credibility.
Digital media provenance ensures transparency, trustworthiness, and accountability, benefiting both content creators and consumers.
Media Origin:
The provenance record helps with attributing credit to the original creators of digital media. It enables content creators to protect their intellectual property rights and ensures they receive appropriate recognition for their work.
Consumers of the media can check its origin and history, giving them confidence that the media is authentic and showing whether it has been processed.
Versions:
Changes are recorded as Events. The immutable audit trail provided by DataTrails records the history of the media, allowing users to verify that it contains no unofficial changes.
There are a great many documents that serve as evidence in formal discussions: shipping manifests; pictures of a traffic accident; statements of account; education diplomas; contracts. DataTrails adds strong integrity to any document to allow easy verification.
It is rare for a document to remain unchanged during its lifetime. Some documents are expected to go through many versions while others change much less frequently.
The
-Document Profile pattern is a suggested set of attributes for Assets and Events for recording the life cycle of a document.
Track Documents: Create a very simple Asset structure with minimal attributes to identify the document and additional attributes to store the key metadata, such as a hash of the document.
Collections: If the document is strongly related to another one, consider adding and tracking them all as Events against a single Asset record.
Versions: If the document is a new version of something already stored in DataTrails, then use Events to replace the document’s metadata with the updated version. Any authorized stakeholder fetching the Asset record will automatically get the most up-to-date version, and prior versions can be retrieved if necessary from the Event history.
Access: For each asset record, it is possible to choose if you want to share that publicly by creating a Public Asset, or with a select group of “friendly” associates by creating a Private asset that is protected by an Access Policy. By sharing publicly, your trail will be verifiable on our Instaproof service by anyone without the need for a DataTrails account.
\ No newline at end of file
+Document Profile pattern is a suggested set of attributes for Assets and Events for recording the life cycle of a document.
Track Documents: Create a very simple Asset structure with minimal attributes to identify the document and additional attributes to store the key metadata, such as a hash of the document.
Collections: If the document is strongly related to another one, consider adding and tracking them all as Events against a single Asset record.
Versions: If the document is a new version of something already stored in DataTrails, then use Events to replace the document’s metadata with the updated version. Any authorized stakeholder fetching the Asset record will automatically get the most up-to-date version, and prior versions can be retrieved if necessary from the Event history.
Access: For each Asset record, it is possible to choose whether to share it publicly by creating a Public Asset, or with a select group of “friendly” associates by creating a Private Asset that is protected by an Access Policy. By sharing publicly, your trail will be verifiable on our Instaproof service by anyone, without the need for a DataTrails account.
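As a sketch of the minimal structure described above (the endpoint follows the Assets API; the document_* attribute names come from the Document Profile and should be checked against that reference):
# Sketch: register a minimal document-tracking Asset.
# The display name and hash value are illustrative placeholders.
curl -X POST "https://app.datatrails.ai/archivist/v2/assets" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "behaviours": ["RecordEvidence"],
        "attributes": {
          "arc_display_name": "Shipping Manifest 2024-019",
          "arc_display_type": "Document",
          "document_hash_value": "<sha-256 of the file>",
          "document_hash_alg": "sha-256"
        }
      }'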
\ No newline at end of file
diff --git a/usecases/bill-of-materials/index.html b/usecases/bill-of-materials/index.html
index b00de87de..5e59dd509 100644
--- a/usecases/bill-of-materials/index.html
+++ b/usecases/bill-of-materials/index.html
@@ -1,4 +1,4 @@
-Bill of Materials - DataTrails
+Bill of Materials - DataTrails
DataTrails
@@ -11,4 +11,4 @@
NTIA SBOM Proof of Concept the need for strong stakeholder community management and a trusted SBOM data sharing mechanism which protects the interests of all parties.
The DataTrails Software Package profile is a set of suggested Asset and Event attributes that offers a solution to this sharing and distribution problem: vendors retain control of their proprietary information and release processes while customers have assured and reliable visibility into their digital supply chain risks with reliable access to current and historical SBOM data for the components they rely on.
As an Asset, a Software Package may hold many different SBOMs over its lifecycle representing the introduction of new releases and versions of the Software Package. Each ‘Release’ is recorded as an Event to capture the known SBOM at the time.
If a particular Software Package has constituent components composed of other Software Package Assets, this would be tracked within the SBOM of the component Supplied Software Package, ensuring full traceability across the Supply Chain.
Key to any successful DataTrails integration is keeping the number of Asset attributes manageable and meaningful. Do not add every entry in the SBOM as an Asset attribute. Instead, reserve Asset attributes for essential metadata such as final build hashes and assured current versions, and put the full details of each released version in attachments and Events.
Note: There are good standards for storing and exchanging SBOM data such as
SWID/ISO/IEC 19770-2:2015,
Cyclone DX, and
-SPDX. DataTrails recommends adopting standard data formats wherever possible, as these vastly improve interoperability and utility of the data exchanged between DataTrails participants.
SBOM as a living document: As a vendor, try to model each final software product as an Asset, and releases/updates to that software product as Events on that Asset. That way, a single Asset history contains all the patch versions of a pristine build standard.
Link to real assets: In reality, not every machine is going to be patched and running identical versions of software, and certainly not the most up-to-date one. As a user of devices, try to link the SBOM from your vendor to the device by having Asset attributes for the Asset Identity of the vendor-published SBOM and the version installed on the device. That way it is easy to find devices that need attention following an SBOM update.
Access Policies: Always try to avoid proliferating Access Policies and make as few as possible with clear user populations and access rights. Typically, very few parties need to update the SBOM record, but many people will need to read it.
Remember that DataTrails is a shared evidence platform. It is there to help share and publish the SBOM and create the trust and transparency that is demanded of modern systems, to ensure the security of the digital supply chain.
\ No newline at end of file
+SPDX. DataTrails recommends adopting standard data formats wherever possible, as these vastly improve interoperability and utility of the data exchanged between DataTrails participants.
SBOM as a living document: As a vendor, try to model each final software product as an Asset, and releases/updates to that software product as Events on that Asset. That way, a single Asset history contains all the patch versions of a pristine build standard.
Link to real assets: In reality, not every machine is going to be patched and running identical versions of software, and certainly not the most up-to-date one. As a user of devices, try to link the SBOM from your vendor to the device by having Asset attributes for the Asset Identity of the vendor-published SBOM and the version installed on the device. That way it is easy to find devices that need attention following an SBOM update.
Access Policies: Always try to avoid proliferating Access Policies and make as few as possible with clear user populations and access rights. Typically, very few parties need to update the SBOM record, but many people will need to read it.
Remember that DataTrails is a shared evidence platform. It is there to help share and publish the SBOM and create the trust and transparency that is demanded of modern systems, to ensure the security of the digital supply chain.
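To make the release pattern concrete, a new version might be recorded as an Event along these lines. This is a hedged sketch: the sbom_* and current_version attribute names are illustrative choices for this example, and the Software Package profile should be consulted for the recommended set.
# Sketch: record a Release event against a Software Package Asset.
# <software-package-uuid> and all attribute values are placeholders.
curl -X POST "https://app.datatrails.ai/archivist/v2/assets/<software-package-uuid>/events" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "Record",
        "behaviour": "RecordEvidence",
        "event_attributes": {
          "arc_display_type": "Release",
          "sbom_version": "1.4.2",
          "sbom_format": "CycloneDX",
          "sbom_hash": "<sha-256 of the SBOM file>"
        },
        "asset_attributes": { "current_version": "1.4.2" }
      }'
The full SBOM itself would travel as an attachment on the Event, keeping the Asset attributes limited to the essential metadata discussed above.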
DataTrails is a powerful and flexible platform enabling users to record Who Did What & When to any content. To get the best out of DataTrails, it is important to model your real-world assets and business processes efficiently into DataTrails
Assets and
-Events.
The three most common patterns are:
Authenticity and Attestation: proving the state of documents and data at a point in time. Also known as ‘Provenance’.
Bill of Materials: tracing the contents and composition of assets.
State Machine and Supply Chains: following the progress of an asset as it moves through a business process or lifecycle states.
\ No newline at end of file
diff --git a/usecases/responsible-ai/index.html b/usecases/responsible-ai/index.html
index 265099645..90c8297f7 100644
--- a/usecases/responsible-ai/index.html
+++ b/usecases/responsible-ai/index.html
@@ -1,4 +1,4 @@
-Responsible AI - DataTrails
+Responsible AI - DataTrails
As AI technologies become more common, the need for trust in AI grows at an even greater rate. You need to trust the AI model, the dataset that trains the AI machine, and the statements about governance and compliance made by the AI vendor before you can trust the output of the AI machine.
Responsible AI includes an ethical and legal viewpoint to ensure that AI works for the good of society; fundamental to this are Trust and Transparency.
As consumers of the AI model:
We need to be certain that an AI machine is making decisions that are no worse than those that would be made by a trained and competent human.
We need to know that it has been trained on ‘good’ data, not ‘bad’ data.
We need to know that the system has been designed to be compliant with the correct standards and policies.
We need to know that it will not misuse our personal information.
We need to know that the system is being developed and improved to those same standards.
Above all, we don’t want to take the vendor’s word for it; they need to prove it!
DataTrails empowers this by providing an immutable lineage record (the data trail) for all aspects of the AI machine, supporting responsible and ethical governance coupled with transparency and traceability of the training data and output analysis. Together these enhance the explainability and interpretability of the AI machine’s output, which results in trust and efficient decision making by the user, whether that user is a human or another AI machine.
Policy and Standards Compliance: A set of Asset attributes can be created to record the baseline compliance of the AI system. This can include internal policies such as Bias, Discrimination and Copyright statements or external policies such as GDPR and other legal frameworks.
Any policy changes or changes in compliance status can be recorded as an Event to build the immutable record of compliance over time.
The AI Model and the Training Data: The versions of the AI process model, the AI machine software, and the Training datasets could also be recorded as Asset attributes. Other things to include could be changes to the Training model and any manual Training decisions that influence the output of the AI machine.
-Recording updates as Events will transparently record the version history of the working components of the AI system as it is developed and improved.
Access Policies: Use Access policies to enable fine-grained control over access to the data. Access Policies provide stakeholders with the transparent access to the untampered provenance record that they need to be able to make decisions and gain trust in the system.
\ No newline at end of file
+Recording updates as Events will transparently record the version history of the working components of the AI system as it is developed and improved.
Access Policies: Use Access policies to enable fine-grained control over access to the data. Access Policies provide stakeholders with the transparent access to the untampered provenance record that they need to be able to make decisions and gain trust in the system.
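As a sketch of what such a baseline record could look like when the AI system is registered as an Asset (every attribute name below other than arc_display_name and arc_display_type is an invented choice for this example, not a name prescribed by DataTrails):
# Sketch: register an AI system with baseline compliance attributes.
# model_version, training_dataset, gdpr_statement_version and
# bias_policy_version are invented names for illustration only.
curl -X POST "https://app.datatrails.ai/archivist/v2/assets" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "behaviours": ["RecordEvidence"],
        "attributes": {
          "arc_display_name": "Credit Risk Model",
          "arc_display_type": "AI System",
          "model_version": "3.1.0",
          "training_dataset": "loans-curated-2024Q1",
          "gdpr_statement_version": "2.0",
          "bias_policy_version": "1.3"
        }
      }'
Subsequent Events carrying updated values in asset_attributes then build the immutable compliance history described above.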
Tracking and tracing the lifecycle of physical Assets - from IoT Devices (embedded sensors, handheld equipment) to a whole distribution depot - is a key strength of DataTrails. The ability to collect and examine the entire life history of critical Assets - their provenance - is crucial to building secure and trustworthy systems.
This also applies to digital assets such as software applications, equipment firmware, images and documents. Every item involved in the supply chain has a lifecycle.
Knowing what state an asset is in, whether or not it is compliant with organizational policy, and whether it needs any attention right now can help a connected system run smoothly. This eliminates the mundane in lifecycle management and allows expert resources to focus only on those parts of the estate that need attention.
Build the Asset over time: The Asset lifecycle covers its entire life, from design and build to procurement and use, and finally disposal. During this time the Asset evolves and develops new properties and characteristics which are not necessarily foreseeable at creation time. DataTrails supports the addition of new properties at any time in the lifecycle so there is no need to design and fill in everything up-front. Start with a simple - even empty - Asset and let DataTrails track and trace the new properties as they naturally occur.
Verify and confirm security data: For digital Assets, a lot of the effort spent on lifecycle management will be spent on software and firmware management. The DataTrails ‘Witness Statement’ approach to creating Asset histories enables statements of intent to be recorded alongside ground truths: for example, a claimed software update next to a digitally signed platform attestation proving that it was done.
Access Policies: Always try to avoid proliferating Access Policies and make as few as possible with clear user populations and access rights. Generally, all parties will need read access to all the Events in the Asset history but it may be convenient to restrict Event write access to mirror real-world approvers and actors.
Compliance Policies: If a device has a mandatory maintenance schedule (security updates, sensor calibration) then this can be monitored and recorded using a compliance policy.
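To illustrate ‘Build the Asset over time’, an Event can introduce an attribute the Asset did not have at creation. A sketch, with firmware_version as an invented attribute name:
# Sketch: a firmware update that adds a new attribute mid-lifecycle.
# firmware_version is illustrative; <device-uuid> is a placeholder.
curl -X POST "https://app.datatrails.ai/archivist/v2/assets/<device-uuid>/events" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "Record",
        "behaviour": "RecordEvidence",
        "event_attributes": {
          "arc_display_type": "Firmware Update",
          "firmware_version": "7.2.1"
        },
        "asset_attributes": { "firmware_version": "7.2.1" }
      }'
From this point on the Asset carries a firmware_version attribute, and the Event records when, and on whose authority, it appeared.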
“Multi-party business processes” and “Asset lifecycle tracing” are examples of a more general pattern: Supply Chain Handling.
The ‘State Machine’ and ‘Lifecycle Tracing’ patterns are very similar, but the former puts a greater emphasis on modeling and tracing the Events while the latter concentrates more on the evolving state of the Assets. Combining these concepts makes it possible to easily trace complex multi-party supply chains without stakeholders having to adapt to each other’s ways of working. Everyone participates on their own terms using their own tools and processes, and DataTrails bridges the gap to make data available where it is needed.
The Chain of Custody is a documented record of the people or entities that physically or digitally handle a product as it moves from constituent parts to the end customer.
By combining all three, to complete the Supply Chain, DataTrails allows you to:
Enable global visibility to all stakeholders
Provide continuous data assurance for accessibility, integrity and resilience
Integrate with physical items and devices in a platform agnostic way
Comply with internal and external regulatory standards
The DataTrails platform records who did what when (and where when appropriate) to build an immutable and auditable account of the entire history of a product as it passes through the supply chain. This is the Data Trail.
The platform allows multi-party sharing and visibility of supply chain data which empowers trusted data exchange and verification. Supply chain partners have a single source of truth that gives them confidence that decisions are made by the right people, at the right step of the process, using the right data and with confidence that the data is the correct version and is untampered.
It also provides proof of the ownership and operational status of both digital and physical assets and enhances statements of compliance and quality assurance.
Custom Attributes: A core set of attributes can be created specifically to suit each asset and event type. DataTrails has the flexibility to allow these to be modified as the business needs develop over time. They are not set in stone.
GIS position information: Make good use of the arc_gis_* attributes of Events in order to trace Where Who Did What When. Remember that the physical environment can make a lot of difference to the virtual security of your Assets.
Access Policies 1: Always try to avoid proliferating Access Policies and make as few as possible with clear user populations and access rights. Nonetheless, complete supply chain operations are complex and thought must be given to Access Policy configuration to account for changes of custody.
Access Policies 2: Consider how far up or down the supply chain visibility should be offered. For example, a customer/operator should be able to see manufacturing data but the manufacturer may or may not be entitled to see usage data.
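A change-of-custody Event using the GIS attributes mentioned above might look like the following sketch (arc_gis_lat and arc_gis_lng are from the reserved arc_gis_* family; custodian is an invented attribute for this example):
# Sketch: a custody transfer with position information.
# The UUID, coordinates and custodian value are placeholders.
curl -X POST "https://app.datatrails.ai/archivist/v2/assets/<asset-uuid>/events" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "Record",
        "behaviour": "RecordEvidence",
        "event_attributes": {
          "arc_display_type": "Change of Custody",
          "arc_gis_lat": "51.5074",
          "arc_gis_lng": "-0.1278",
          "custodian": "Acme Logistics Ltd"
        }
      }'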
\ No newline at end of file
diff --git a/usecases/sc-state-machine/index.html b/usecases/sc-state-machine/index.html
index 7c698d4d3..fd61dd087 100644
--- a/usecases/sc-state-machine/index.html
+++ b/usecases/sc-state-machine/index.html
@@ -1,4 +1,4 @@
-Supply Chain: Process Governance and Modelling - DataTrails
+Supply Chain: Process Governance and Modelling - DataTrails
A common pattern for tracking an Asset lifecycle is the State Machine pattern for Multi-party business processes. This is a good choice for multi-stakeholder process modelling, particularly where the order of operations is important or activities are triggered by actions of partners. Tracing multi-stakeholder business processes in DataTrails not only ensures transparency and accountability among parties, but is also faster and more reliable than typical cross-organization data sharing and process management involving phone calls and spreadsheets.
Modelling such systems in DataTrails can help to rapidly answer questions like “are my processes running smoothly?”, “do I need to act?”, and “has this asset been correctly managed?”. In audit situations, the Asset histories also allow stakeholders to look back in time and ask “who knew what at the time? Could process violations have been detected earlier?”
This pattern uses a purely virtual Asset to represent a policy or process and coordinate movement through that process, complete with multi-party inputs and approvals. The emphasis here is on Events rather than Asset attributes: What Happened? Who Was There? What evidence was used to decide to move to the next stage of the process?
Keep the Asset simple: This model typically uses mostly non-modifying Events: “what happened” is more important than “what does this Asset look like?”. Use Asset attributes only to clearly identify the business process and store its current state. Otherwise, concentrate on recording the Who Did What When in detailed Event attributes.
Map the business process: DataTrails is here to support business operations, not disturb them. Try to define one Event type for each stage of the process, so decisions and artifacts can be recorded naturally and completely during normal operations. In a mature business, there may be formal documents such as a Process Map (PM), Business Process Model (BPM) or Unified Modeling Language (UML) description of the process, its steps, and its approvers. Use this as a base if it is available.
Record decisions clearly: Future decisions will depend on the evidence of past ones. Make sure that all relevant information is recorded in Event records in the right format for the intended consumer: if decisions are made by humans, rich attachments are a good option. If software or AI are involved, then Event attributes are often a stronger choice.
Access Policies: Always try to avoid proliferating Access Policies and make as few as possible with clear user populations and access rights. Generally, all parties will need read access to all the Events in the Asset history, but it may be convenient to restrict Event write access to mirror real-world approvers and actors.
Compliance Policies: If the process must meet recognized standards and is subject to regular audits these can be monitored and recorded using a compliance policy.
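A single stage transition in this pattern might be recorded as in the sketch below, where process_state and approved_by are invented attribute names rather than reserved ones:
# Sketch: a QA approval that advances the process state.
# All attribute names and values below are illustrative.
curl -X POST "https://app.datatrails.ai/archivist/v2/assets/<process-uuid>/events" \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{
        "operation": "Record",
        "behaviour": "RecordEvidence",
        "event_attributes": {
          "arc_display_type": "QA Approval",
          "approved_by": "qa-lead@example.com"
        },
        "asset_attributes": { "process_state": "approved-for-shipment" }
      }'
The asset_attributes entry keeps the process’s current state directly queryable on the Asset, while the Event itself preserves who approved the transition and when.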