This Solution is a WORK IN PROGRESS
Oracle Cloud Infrastructure (OCI) Full Stack Disaster Recovery orchestrates the transition of compute, database, and storage between OCI regions around the globe with a single click.
Visit the official Full Stack Disaster Recovery Documentation.
Full Stack Disaster Recovery (FSDR) overcomes two main challenges:
- Increasing complexity: infrastructure, database management, and applications keep growing in complexity, which makes Disaster Recovery difficult.
- Manual scripts and jobs are problematic: manual processes are often error-prone, unreliable, time-consuming, and require specialized skill sets. They become even more problematic when everything has to happen in the middle of a disaster.
Full Stack Disaster Recovery fits into the Oracle Cloud Maximum Availability Architecture, which classifies applications by criticality as Bronze, Silver, Gold, and Platinum. Find more information in Oracle MAA Reference Architectures.
For Oracle SaaS applications, check MAA Best Practices - Oracle Applications.
Businesses with existing applications using Oracle Database Cloud Services and compute can create a Disaster Recovery plan with OCI Full Stack Disaster Recovery (FSDR).
The example is an Active-Passive DR.
The application is composed of an Oracle Exadata Database Dedicated or an Autonomous Database (Shared or Dedicated), and a Java REST API exposed through a Load Balancer.
For more details, check the NOTES:
- ADB-S
- FSDR infra
- Data Guard
- Change Architecture to include ADB-S, ADB-D, or ExaDB-D, etc.
- Simulate Disaster
- Rsync
- Include constant synthetic workload
- Add Object Storage in the DR sync
- Support ADB-D
- Support ExaDB-D
- Logging Analytics
- Include OCI Notification and OCI Events to get notified by email on switchover/failover
- Vault integration
- Include OCI Vault secret for Oracle Database
Build the backend application:
cd src/backend
./gradlew clean bootJar
cd ../..
Answer all the questions from the setenv.mjs script:
zx scripts/setenv.mjs
Generate the terraform.tfvars file:
zx scripts/tfvars.mjs
Change to the terraform folder:
cd deploy/tf
Terraform init:
terraform init
Terraform apply:
Auto-approve is for demo purposes only. Otherwise, run terraform plan and review the changes first.
terraform apply -auto-approve
Come back to the root folder:
cd ../..
Execute a request in both regions (use both Load Balancer IP addresses from the Terraform output):
zx scripts/request.mjs -h LOAD_BALANCER_IP_ADDRESS
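If you want to run both requests in one go, the following is a minimal, hypothetical zx sketch (not part of the repository) that reads the IP addresses from the Terraform output and calls the request script for each region. The output names lb_ip_primary and lb_ip_secondary are assumptions; adjust them to the names in your own Terraform output.

```js
// check-both-regions.mjs — hypothetical helper, run with: zx check-both-regions.mjs
// Reads the Load Balancer IPs from the Terraform output and calls the provided
// request script for each region. The output names below are assumptions.
cd('deploy/tf');
const result = await $`terraform output -json`;
cd('../..');
const outputs = JSON.parse(result.stdout);
for (const name of ['lb_ip_primary', 'lb_ip_secondary']) {
  const ip = outputs[name]?.value;
  if (ip) {
    await $`zx scripts/request.mjs -h ${ip}`;
  } else {
    console.log(`No Terraform output named ${name}`);
  }
}
```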
Running into problems? SSH into the machines.
Create an OCI Bastion session to connect through a Managed SSH session:
zx scripts/bastion-session.mjs
Pick the region and the compute instance, then copy/paste the SSH command.
Check the recorded requests and the responses from each region with this query:
SELECT
REQ.REQUEST_DATE "Creation Date",
RES.REGION "Region",
RES.STATUS "Status",
RES.ERROR_MESSAGE "Error"
FROM
RESPONSES RES
INNER JOIN REQUESTS REQ ON RES.REQUEST_ID = REQ.ID
ORDER BY
REQ.REQUEST_DATE DESC;
This project uses K6 to test the deployment.
To install k6, follow this link: Get Started > Installation. Then run the test:
k6 run client/request.js
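For reference, a k6 test of this kind is typically only a few lines. The sketch below is not the repository's client/request.js; the LB_IP environment variable and the request path are assumptions for illustration only.

```js
// Hypothetical k6 smoke test (not the repository's client/request.js).
// The LB_IP environment variable and the request path are illustrative only.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 5,           // 5 virtual users
  duration: '30s',  // for 30 seconds
};

export default function () {
  const res = http.get(`http://${__ENV.LB_IP}/`);
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```

You could run such a script with k6 run -e LB_IP=LOAD_BALANCER_IP_ADDRESS script.js; the -e flag passes the environment variable to the test.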
Change to the terraform folder:
cd deploy/tf
Terraform destroy:
Auto-approve is for demo purposes only.
terraform destroy -auto-approve
Come back to the root folder:
cd ../..
To clean the config files and auxiliary files (SSH keys, certificates, etc.):
zx scripts/clean.mjs
Clean the Java application:
cd src/backend
./gradlew clean
cd ../..