Commit
* Simplify Gradle functions for managing build-time plugins
* Move core services onto new plugin build system
* Turn off default plugins flag
* Move deploy-metadb and -lib-db onto new plugin handling
* Set plugin dependencies (pre work on aws-storage)
* Switch to AWS SDK V2
* Stub AWS storage plugin
* Add AWS storage plugin to build config, disable AWS config for now (on AWS V1 API)
* Use data context instead of exec context for file reader / writer method in IFileStorage
* Stub IFileStorage methods in S3 object storage
* Add integration testing (stub) in CI for S3 storage
* Add integration tests for data round trip
* Set up integration testing workflow for S3 testing
* Do not include logging in storage integration config
* Use different H2 DB path to let deploy-metadb work in integration
* Let platform test setup create the metadb schema
* Let platform test setup create the metadb schema
* Integration tests for tenant separation test
* Integration tests for file round trip and operations
* Integration tests for data operations
* Rename integration test config files for int-metadb tests
* Rename stability test
* Expose storage tests as a test suite that can be used across implementations (see the sketch after this list)
* Set up storage operations test for AWS storage
* Move storage errors class to the main common storage package
* Move exception class mapping for local storage errors into a specialized class
* Run storage test suite for S3 storage impl
* Move storage operation constants into IFileStorage (share across impls)
* Move generic handling of known exceptions in storage error handler
* Fix a few storage ops tests to allow running with one storage instance for the whole suite
* Allow more settings to be defined in PlatformTest
* Trace logging in core codecs
* Explicit start and stop for storage, pass in service ELG
* Change default number of service threads in data service
* Add Netty NIO options lib to AWS storage plugin
* Update data plugin test suite for S3
* S3 storage plugin rough work
* Implement S3 storage in the runtime (this is temporary, we should switch to arrow file system + fsspec)
* Fix handling of leading slash in S3 object storage root path
* Update CI configuration (S3 testing not available in CI yet)
* Packaging for AWS plugin
* Compliance fixes
* Compliance fixes
* Compliance fixes
* Filter out plugin JARs that are already included as part of the TRAC platform
* Allow MIT-0 license
* Quick documentation on AWS S3 storage
* Handle plugin load failures
* Supply unit test config for slow unit (integration) tests
* Handle plugin load failures
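Several items above expose the storage operations tests as a suite that runs against any storage implementation (local and S3). A minimal sketch of that pattern, assuming JUnit 5 and a deliberately simplified, hypothetical storage interface (the real IFileStorage API in this commit is richer than this):

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertArrayEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical, simplified storage interface used only for this sketch;
// it is not the IFileStorage API from the commit.
interface SimpleStorage {
    void write(String storagePath, byte[] content) throws Exception;
    byte[] read(String storagePath) throws Exception;
    boolean exists(String storagePath) throws Exception;
}

// Abstract suite: every implementation (local disk, S3, ...) supplies its own
// storage instance and inherits the same operational tests.
public abstract class StorageOperationsSuite {

    // Each concrete suite wires up one storage instance for the whole run
    protected abstract SimpleStorage storage();

    @Test
    void fileRoundTrip() throws Exception {

        var original = "hello storage".getBytes();

        storage().write("test/round_trip.txt", original);

        assertTrue(storage().exists("test/round_trip.txt"));
        assertArrayEquals(original, storage().read("test/round_trip.txt"));
    }
}

An S3 suite would then just subclass this and return its own storage instance from storage(), which mirrors the "one storage instance for the whole suite" item above.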
Martin Traverse authored Dec 5, 2022
1 parent 84dc604 · commit 05bd5d3
Showing 71 changed files with 3,263 additions and 599 deletions.
4 files renamed without changes.
@@ -0,0 +1,104 @@

# Copyright 2022 Accenture Global Solutions Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


config:
  secret.type: PKCS12
  secret.url: secrets.p12


platformInfo:
  environment: TEST_ENVIRONMENT
  production: false
  deploymentInfo:
    region: UK


authentication:
  jwtIssuer: http://localhost/
  jwtExpiry: 7200


# Stick with H2 database for storage integration testing
metadata:
  format: PROTO
  database:
    protocol: JDBC
    properties:
      dialect: H2
      jdbcUrl: ${TRAC_DIR}/trac.meta
      h2.user: trac
      h2.pass: trac
      h2.schema: public
      pool.size: 10
      pool.overflow: 5


storage:

  defaultBucket: STORAGE_INTEGRATION
  defaultFormat: ARROW_FILE

  buckets:

    STORAGE_INTEGRATION:
      protocol: S3
      properties:
        region: ${TRAC_AWS_REGION}
        bucket: ${TRAC_AWS_BUCKET}
        path: int-storage-s3
        accessKeyId: ${TRAC_AWS_ACCESS_KEY_ID}
        secretAccessKey: ${TRAC_AWS_SECRET_ACCESS_KEY}


repositories:
  UNIT_TEST_REPO:
    protocol: git
    properties:
      repoUrl: ${CURRENT_GIT_ORIGIN}


executor:
  protocol: LOCAL
  properties:
    venvPath: ${TRAC_EXEC_DIR}/venv


instances:

  meta:
    - scheme: http
      host: localhost
      port: 8081

  data:
    - scheme: http
      host: localhost
      port: 8082

  orch:
    - scheme: http
      host: localhost
      port: 8083


services:

  meta:
    port: 8081

  data:
    port: 8082

  orch:
    port: 8083
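For context, the STORAGE_INTEGRATION bucket above is resolved entirely from environment variables, and this commit moves the platform onto the AWS SDK V2. A rough, hypothetical sketch (not the actual plugin code) of how properties shaped like that bucket entry could be turned into an SDK v2 client:

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;

import java.util.Properties;

public class S3ClientFromProperties {

    // Properties shaped like the STORAGE_INTEGRATION bucket entry above:
    // region, accessKeyId, secretAccessKey
    public static S3Client build(Properties props) {

        var credentials = AwsBasicCredentials.create(
                props.getProperty("accessKeyId"),
                props.getProperty("secretAccessKey"));

        return S3Client.builder()
                .region(Region.of(props.getProperty("region")))
                .credentialsProvider(StaticCredentialsProvider.create(credentials))
                .build();
    }
}

In this sketch the bucket and path properties are not part of the client itself; they would be applied per request, as the bucket name and object key prefix.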
@@ -16,3 +16,4 @@ Deployment

platform
metadata_store
authentication
storage
@@ -0,0 +1,72 @@

Storage Configuration
=====================


Local Storage
-------------

Local storage is available in the base platform and does not require installing any plugins.
For instructions on setting up local storage, see the
:doc:`sandbox quick start guide <sandbox>`.


AWS S3 Storage
--------------

You will need to set up an S3 bucket and an IAM user with permissions to access that bucket.
These are the permissions that need to be assigned to the bucket:

.. code-block:: json

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "ListObjectsInBucket",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<aws_account_id>:user/<iam_user>"
          },
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::<bucket_name>"
        },
        {
          "Sid": "AllObjectActions",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<aws_account_id>:user/<iam_user>"
          },
          "Action": [
            "s3:*Object",
            "s3:*ObjectAttributes"
          ],
          "Resource": "arn:aws:s3:::<bucket_name>/*"
        }
      ]
    }
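Before pointing TRAC at the bucket, it may be worth checking that the IAM user really has the permissions granted by the policy above. A small, hypothetical verification sketch using the AWS SDK for Java v2 (the SDK version the platform uses from this commit onwards); the region, bucket name and credentials are placeholders:

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.DeleteObjectRequest;
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class CheckBucketAccess {

    public static void main(String[] args) {

        // Placeholders - substitute the real region, bucket and IAM user credentials
        var bucket = "<bucket_name>";
        var credentials = AwsBasicCredentials.create(
                "<aws_access_key_id>", "<aws_secret_access_key>");

        try (var s3 = S3Client.builder()
                .region(Region.of("<aws_region>"))
                .credentialsProvider(StaticCredentialsProvider.create(credentials))
                .build()) {

            // Exercises s3:ListBucket from the first policy statement
            var listing = s3.listObjectsV2(ListObjectsV2Request.builder().bucket(bucket).build());
            System.out.println("ListBucket OK, keys visible: " + listing.keyCount());

            // Exercises s3:PutObject and s3:DeleteObject from the second statement
            var key = "trac-access-check.txt";
            s3.putObject(
                    PutObjectRequest.builder().bucket(bucket).key(key).build(),
                    RequestBody.fromString("access check"));
            s3.deleteObject(DeleteObjectRequest.builder().bucket(bucket).key(key).build());
            System.out.println("PutObject / DeleteObject OK");
        }
    }
}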
To install the AWS storage plugin, download the plugins package from the latest release on the
`release page <https://github.com/finos/tracdap/releases>`_. Inside the plugins package you
will find a folder for the AWS storage plugin; copy the contents of this folder into the *plugins*
folder of your TRAC D.A.P. installation.

You will then be able to configure an S3 storage instance in your platform configuration. The region,
bucket name and access key properties are required.

The *path* property is optional; if specified, it will be used as a prefix for all objects stored in the bucket.
TRAC follows the convention of using path-like object keys, so forward slashes can be used in the path prefix if desired.

.. code-block:: yaml

    storage:
      buckets:
        TEST_PLUGIN:
          protocol: S3
          properties:
            region: <aws_region>
            bucket: <aws_bucket_name>
            path: <storage_prefix>
            accessKeyId: <aws_access_key_id>
            secretAccessKey: <aws_secret_access_key>