From b5cd87485bfbf7c06893e6ab10fe6e81e9a7fa9c Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Fri, 26 Jul 2024 09:07:56 +0200 Subject: [PATCH 01/15] Remove version-specific versions --- product_docs/docs/efm/4/efm_deploy_arch/03_efm_vip.mdx | 2 +- .../efm/4/efm_deploy_arch/04_efm_client_connect_failover.mdx | 2 +- product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx | 4 ++-- product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-) diff --git a/product_docs/docs/efm/4/efm_deploy_arch/03_efm_vip.mdx b/product_docs/docs/efm/4/efm_deploy_arch/03_efm_vip.mdx index b8a6baf4ea0..5d6f83129e2 100644 --- a/product_docs/docs/efm/4/efm_deploy_arch/03_efm_vip.mdx +++ b/product_docs/docs/efm/4/efm_deploy_arch/03_efm_vip.mdx @@ -19,7 +19,7 @@ on three servers as following: Systems |Components ------------------------------------------|----------------------------------------------------------------------------- - PG Primary, PG Standby1, and PG Standby2 | Primary / standby nodes running Advanced Server 13 and Failover Manager 4.2 + PG Primary, PG Standby1, and PG Standby2 | Primary / standby nodes running Advanced Server and Failover Manager ### Specifying VIP diff --git a/product_docs/docs/efm/4/efm_deploy_arch/04_efm_client_connect_failover.mdx b/product_docs/docs/efm/4/efm_deploy_arch/04_efm_client_connect_failover.mdx index 0ce6778c930..2b018541739 100644 --- a/product_docs/docs/efm/4/efm_deploy_arch/04_efm_client_connect_failover.mdx +++ b/product_docs/docs/efm/4/efm_deploy_arch/04_efm_client_connect_failover.mdx @@ -24,7 +24,7 @@ Install and configure Advanced Server and Failover Manager on three servers as f Systems | Components -------------------------------------------|----------------------------------------------------------------------------- - PG Primary, PG Standby1, and PG Standby2 | Primary or standby nodes running Advanced Server 13 and Failover Manager 4.2 + PG Primary, PG Standby1, and PG Standby2 | Primary or standby nodes running Advanced Server and Failover Manager You don't need to configure the virtual IP configuration in `efm.properties` (`virtual.ip`, diff --git a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx index 36fc834e77f..ba253231c1b 100644 --- a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx +++ b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx @@ -53,8 +53,8 @@ Install and configure Advanced Server database, Failover Manager, and EDB PgBoun Systems | Components --------------------| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - PgDB srv 1, 2, 3 | Primary / standby node running Advanced Server 13 and Failover Manager 4.2 - PgBouncer srv 1, 2 | PgBouncer node running EDB PgBouncer 1.15. Register these two nodes as targets in the target group. Two is the minimum and is sufficient for most cases. + PgDB srv 1, 2, 3 | Primary / standby node running Advanced Server and Failover Manager + PgBouncer srv 1, 2 | PgBouncer node running EDB PgBouncer. Register these two nodes as targets in the target group. Two is the minimum and is sufficient for most cases. 
### Configuring Failover Manager diff --git a/product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx b/product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx index fb7f74a3675..9f0e3e60373 100644 --- a/product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx +++ b/product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx @@ -39,8 +39,8 @@ EDB Pgpool-II as follows: **Systems** | **Components** --------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - PgDB server 1, server2, and server 3 | Primary / standby node running Advanced Server 13 and Failover Manager 4.2 - EDB Pgpool-II |Pgpool node running EDB Pgpool-II 4.2 in a watchdog configuration. Register these three nodes as targets in the target group. Three is the minimum and is sufficient for most cases. + PgDB server 1, server2, and server 3 | Primary / standby node running Advanced Server and Failover Manager + EDB Pgpool-II |Pgpool node running EDB Pgpool-II in a watchdog configuration. Register these three nodes as targets in the target group. Three is the minimum and is sufficient for most cases. From 8b474ece71b3a822d8ec0fc913abeb633362b41c Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 29 Jul 2024 09:30:22 +0100 Subject: [PATCH 02/15] Update terminology.mdx quorum --- product_docs/docs/pgd/5/terminology.mdx | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index 2777b771fbe..f57e5284036 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -89,7 +89,8 @@ Traditionally, in PostgreSQL, a number of databases running on a single server i #### Quorum -When a [Raft](#replicated-available-fault-tolerance-raft) [consensus](#consensus) is needed by a PGD cluster, a minimum number of voting nodes participating in the vote are needed. This number is called a quorum. For example, with a 5-node cluster, the quorum is 3 nodes in the cluster voting. A consensus is 5/2+1 nodes, 3 nodes voting the same way. If there are only 2 voting nodes, then a consensus is never established. +A quorum is the minimum number of voting processes needed to take place within a distributed vote. It ensures that the decision made has validity. For example, +when a [Raft](#replicated-available-fault-tolerance-raft) [consensus](#consensus) is needed by a PGD cluster, a minimum number of voting nodes participating in the vote are needed. With a 5-node cluster, the quorum is 3 nodes in the cluster voting. A consensus is 5/2+1 nodes, 3 nodes voting the same way. If there are only 2 voting nodes, then a consensus is never established. Quorums are required in PGD for [global locks](/pgd/ddl/ddl-locking/) and Raft decisions. 
#### Replicated available fault tolerance (Raft) From 21b42d38c183fa9566ac4ee3fa98cfe0b50e4a1a Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 29 Jul 2024 10:17:10 +0100 Subject: [PATCH 03/15] Fixed link Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/terminology.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index f57e5284036..7d595f7ee78 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -90,7 +90,7 @@ Traditionally, in PostgreSQL, a number of databases running on a single server i #### Quorum A quorum is the minimum number of voting processes needed to take place within a distributed vote. It ensures that the decision made has validity. For example, -when a [Raft](#replicated-available-fault-tolerance-raft) [consensus](#consensus) is needed by a PGD cluster, a minimum number of voting nodes participating in the vote are needed. With a 5-node cluster, the quorum is 3 nodes in the cluster voting. A consensus is 5/2+1 nodes, 3 nodes voting the same way. If there are only 2 voting nodes, then a consensus is never established. Quorums are required in PGD for [global locks](/pgd/ddl/ddl-locking/) and Raft decisions. +when a [Raft](#replicated-available-fault-tolerance-raft) [consensus](#consensus) is needed by a PGD cluster, a minimum number of voting nodes participating in the vote are needed. With a 5-node cluster, the quorum is 3 nodes in the cluster voting. A consensus is 5/2+1 nodes, 3 nodes voting the same way. If there are only 2 voting nodes, then a consensus is never established. Quorums are required in PGD for [global locks](ddl/ddl-locking/) and Raft decisions. #### Replicated available fault tolerance (Raft) From 02df97eb07594b9ca027ea149f3b3626016b18e7 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Mon, 29 Jul 2024 10:20:11 +0100 Subject: [PATCH 04/15] Clarify language Signed-off-by: Dj Walker-Morgan --- product_docs/docs/pgd/5/terminology.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/product_docs/docs/pgd/5/terminology.mdx b/product_docs/docs/pgd/5/terminology.mdx index 7d595f7ee78..efbc63e12c5 100644 --- a/product_docs/docs/pgd/5/terminology.mdx +++ b/product_docs/docs/pgd/5/terminology.mdx @@ -89,7 +89,7 @@ Traditionally, in PostgreSQL, a number of databases running on a single server i #### Quorum -A quorum is the minimum number of voting processes needed to take place within a distributed vote. It ensures that the decision made has validity. For example, +A quorum is the minimum number of voting nodes needed to participate in a distributed vote. It ensures that the decision made has validity. For example, when a [Raft](#replicated-available-fault-tolerance-raft) [consensus](#consensus) is needed by a PGD cluster, a minimum number of voting nodes participating in the vote are needed. With a 5-node cluster, the quorum is 3 nodes in the cluster voting. A consensus is 5/2+1 nodes, 3 nodes voting the same way. If there are only 2 voting nodes, then a consensus is never established. Quorums are required in PGD for [global locks](ddl/ddl-locking/) and Raft decisions. 
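As a quick sanity check on the 5/2+1 arithmetic above, here is a minimal illustrative query (plain Postgres integer division, no PGD functions assumed):

```sql
-- Consensus threshold (n/2 + 1, integer division) for a few cluster sizes.
-- With n = 5 the threshold is 3 nodes; with n = 2 it is 2 nodes, so the
-- loss of either voting node makes consensus impossible.
SELECT n AS voting_nodes, n / 2 + 1 AS consensus_threshold
FROM generate_series(2, 7) AS n;
```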
#### Replicated available fault tolerance (Raft) From d23be0c8d04785f11be9358e5968c11c4bdce868 Mon Sep 17 00:00:00 2001 From: gvasquezvargas Date: Mon, 29 Jul 2024 16:53:11 +0200 Subject: [PATCH 05/15] re-adding the pgpool and pgbouncer versions --- .../docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx | 2 +- product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx | 8 ++++---- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx index ba253231c1b..80a01eec233 100644 --- a/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx +++ b/product_docs/docs/efm/4/efm_deploy_arch/05_efm_pgbouncer.mdx @@ -54,7 +54,7 @@ Install and configure Advanced Server database, Failover Manager, and EDB PgBoun Systems | Components --------------------| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- PgDB srv 1, 2, 3 | Primary / standby node running Advanced Server and Failover Manager - PgBouncer srv 1, 2 | PgBouncer node running EDB PgBouncer. Register these two nodes as targets in the target group. Two is the minimum and is sufficient for most cases. + PgBouncer srv 1, 2 | PgBouncer node running EDB PgBouncer 1.15. Register these two nodes as targets in the target group. Two is the minimum and is sufficient for most cases. ### Configuring Failover Manager diff --git a/product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx b/product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx index 9f0e3e60373..ac0fe7296b2 100644 --- a/product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx +++ b/product_docs/docs/efm/4/efm_deploy_arch/06_efm_pgpool.mdx @@ -37,10 +37,10 @@ Install and configure Advanced Server database, Failover Manager, and EDB Pgpool-II as follows: - **Systems** | **Components** - --------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- - PgDB server 1, server2, and server 3 | Primary / standby node running Advanced Server and Failover Manager - EDB Pgpool-II |Pgpool node running EDB Pgpool-II in a watchdog configuration. Register these three nodes as targets in the target group. Three is the minimum and is sufficient for most cases. + | **Systems** | **Components** | + |--------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| + | PgDB server 1, server2, and server 3 | Primary / standby node running Advanced Server and Failover Manager | + | EDB Pgpool-II | Pgpool node running EDB Pgpool-II 4.2 in a watchdog configuration. Register these three nodes as targets in the target group. Three is the minimum and is sufficient for most cases. 
| From ca1bcbe10a3a72fb834ec8406239ffbca6d8a467 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 29 Jul 2024 16:41:11 +0100 Subject: [PATCH 06/15] Update deploy_options.mdx fix broken link --- product_docs/docs/pge/16/deploy_options.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pge/16/deploy_options.mdx b/product_docs/docs/pge/16/deploy_options.mdx index cd4e196802c..3902933cd39 100644 --- a/product_docs/docs/pge/16/deploy_options.mdx +++ b/product_docs/docs/pge/16/deploy_options.mdx @@ -11,6 +11,6 @@ The deployment options include: - [Installing](installing) on a virtual machine or physical server using native packages -- Deploying it with [EDB Postgres Distributed](/pgd/latest/) using [Trusted Postgres Architect](/pgd/latest/admin-tpa/) +- Deploying it with [EDB Postgres Distributed](/pgd/latest/) using [Trusted Postgres Architect](/pgd/latest/deploy-config/deploy-tpa/) -- Deploying it on [BigAnimal](/biganimal/latest/) with extreme-high-availability cluster types \ No newline at end of file +- Deploying it on [BigAnimal](/biganimal/latest/) with extreme-high-availability cluster types From dea5837d601aed8e7d2d106f77683dda08b50e74 Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan <126472455+djw-m@users.noreply.github.com> Date: Mon, 29 Jul 2024 16:42:51 +0100 Subject: [PATCH 07/15] Update deploy_options.mdx --- product_docs/docs/pge/15/deploy_options.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/product_docs/docs/pge/15/deploy_options.mdx b/product_docs/docs/pge/15/deploy_options.mdx index 4449770f1f7..e4452e3cb8a 100644 --- a/product_docs/docs/pge/15/deploy_options.mdx +++ b/product_docs/docs/pge/15/deploy_options.mdx @@ -11,6 +11,6 @@ The deployment options include: - [Installing](installing) on a virtual machine or physical server using native packages -- Deploying it with [EDB Postgres Distributed](/pgd/latest/) using [Trusted Postgres Architect](/pgd/latest/tpa/) +- Deploying it with [EDB Postgres Distributed](/pgd/latest/) using [Trusted Postgres Architect](/pgd/latest/deploy-config/deploy-tpa/) -- Deploying it on [BigAnimal](/biganimal/latest/) with extreme high availability cluster types \ No newline at end of file +- Deploying it on [BigAnimal](/biganimal/latest/) with extreme high availability cluster types From 3e374108a00c2fc74841254e0b607e5c1f2ac5be Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 30 Jul 2024 08:27:34 +0100 Subject: [PATCH 08/15] Rename from pgai to aidb (also fix bad formatting in s3 section) Signed-off-by: Dj Walker-Morgan --- ...d.png => aidb-overview-withbackground.png} | 0 advocacy_docs/edb-postgres-ai/ai-ml/index.mdx | 8 +- .../ai-ml/install-tech-preview.mdx | 36 ++++----- .../edb-postgres-ai/ai-ml/overview.mdx | 22 +++--- .../additional_functions.mdx | 18 ++--- .../ai-ml/using-tech-preview/index.mdx | 10 +-- .../working-with-ai-data-in-S3.mdx | 51 ++++++++----- .../working-with-ai-data-in-postgres.mdx | 74 +++++++++---------- .../overview/guide-and-getting-started.mdx | 2 +- .../overview/latest-release-news.mdx | 6 +- .../overview/overview-and-concepts.mdx | 2 +- 11 files changed, 122 insertions(+), 107 deletions(-) rename advocacy_docs/edb-postgres-ai/ai-ml/images/{pgai-overview-withbackground.png => aidb-overview-withbackground.png} (100%) diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview-withbackground.png b/advocacy_docs/edb-postgres-ai/ai-ml/images/aidb-overview-withbackground.png similarity 
index 100% rename from advocacy_docs/edb-postgres-ai/ai-ml/images/pgai-overview-withbackground.png rename to advocacy_docs/edb-postgres-ai/ai-ml/images/aidb-overview-withbackground.png diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/index.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/index.mdx index 0a10fe86383..9a674a1250f 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/index.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/index.mdx @@ -3,7 +3,7 @@ title: EDB Postgres AI - AI/ML navTitle: AI/ML indexCards: simple iconName: BrainCircuit -description: How to make use of EDB Postgres AI for AI/ML workloads and using the pgai extension. +description: How to make use of EDB Postgres AI for AI/ML workloads and using the aidb extension. navigation: - overview - install-tech-preview @@ -12,11 +12,11 @@ navigation: EDB Postgres® AI Database is designed to solve all AI data management needs, including storing, searching, and retrieving of AI data. This up-levels Postgres to a database that manages and serves all types of data modalities directly and combines it with its battle-proof strengths as an established Enterprise system of record that manages high-value business data. -In this tech preview, you can use the pgai extension to build a simple retrieval augmented generation (RAG) application in Postgres. +In this tech preview, you can use the aidb extension to build a simple retrieval augmented generation (RAG) application in Postgres. -An [overview](overview) of the pgai extension gives you a high-level understanding of the major functionality available to date. +An [overview](overview) of the aidb extension gives you a high-level understanding of the major functionality available to date. -To get started, you will need to [install the pgai tech preview](install-tech-preview) and then you can start [using the pgai tech preview](using-tech-preview) to build your RAG application. +To get started, you will need to [install the aidb tech preview](install-tech-preview) and then you can start [using the aidb tech preview](using-tech-preview) to build your RAG application. diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx index 719db4771ea..2f4f1e1eaa4 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx @@ -1,11 +1,11 @@ --- -title: EDB Postgres AI AI/ML - Installing the pgai tech preview +title: EDB Postgres AI AI/ML - Installing the aidb tech preview navTitle: Installing -description: How to install the EDB Postgres AI AI/ML pgai tech preview and run the container image. +description: How to install the EDB Postgres AI AI/ML aidb tech preview and run the container image. prevNext: true --- -The preview release of pgai is distributed as a self-contained Docker container that runs PostgreSQL and includes all of the pgai dependencies. +The preview release of aidb is distributed as a self-contained Docker container that runs PostgreSQL and includes all of the aidb dependencies. ## Configuring and running the container image @@ -19,46 +19,46 @@ __OUTPUT__ Login Succeeded ``` -Download the pgai container image: +Download the aidb container image: ```shell -docker pull docker.enterprisedb.com/tech-preview/pgai +docker pull docker.enterprisedb.com/tech-preview/aidb __OUTPUT__ ... 
-Status: Downloaded newer image for docker.enterprisedb.com/tech-preview/pgai:latest
-docker.enterprisedb.com/tech-preview/pgai:latest
+Status: Downloaded newer image for docker.enterprisedb.com/tech-preview/aidb:latest
+docker.enterprisedb.com/tech-preview/aidb:latest
 ```
 
-Specify a password to use for Postgres in the environment variable PGPASSWORD. The tech preview container will set up Postgres with this password and use it to connect to it. In bash or zsh set it as follows:
+Specify a password to use for Postgres in the environment variable PGPASSWORD. The tech preview container sets up Postgres with this password and uses it to connect to it. In bash or zsh set it as follows:
 
 ```shell
 export PGPASSWORD=
 ```
 
-You can use the pgai extension with encoder LLMs in Open AI or with open encoder LLMs from HuggingFace. If you want to use Open AI you also must provide your API key for that in the OPENAI_API_KEY environment variable:
+You can use the aidb extension with encoder LLMs in Open AI or with open encoder LLMs from HuggingFace. If you want to use Open AI you also must provide your API key for that in the OPENAI_API_KEY environment variable:
 
 ```shell
 export OPENAI_API_KEY=
 ```
 
-You can use the pgai extension with AI data stored in Postgres tables or on S3 compatible object storage. To work with object storage you need to specify the ACCESS_KEY and SECRET_KEY environment variables:.
+You can use the aidb extension with AI data stored in Postgres tables or on S3 compatible object storage. To work with object storage you need to specify the ACCESS_KEY and SECRET_KEY environment variables:
 
 ```shell
 export ACCESS_KEY=
 export SECRET_KEY=
 ```
 
-Start the pgai tech preview container with the following command. It makes the tech preview PostgreSQL database available on local port 15432:
+Start the aidb tech preview container with the following command. It makes the tech preview PostgreSQL database available on local port 15432:
 
 ```shell
-docker run -d --name pgai \
+docker run -d --name aidb \
     -e ACCESS_KEY=$ACCESS_KEY \
     -e SECRET_KEY=$SECRET_KEY \
     -e OPENAI_API_KEY=$OPENAI_API_KEY \
     -e POSTGRES_PASSWORD=$PGPASSWORD \
     -e PGDATA=/var/lib/postgresql/data/pgdata \
    -p 15432:5432 \
-   docker.enterprisedb.com/tech-preview/pgai:latest
+   docker.enterprisedb.com/tech-preview/aidb:latest
 ```
 
@@ -70,7 +70,7 @@ If you haven't yet, install the Postgres command-line tools. If you're on a Mac,
 brew install libpq
 ```
 
-Connect to the tech preview PostgreSQL running in the container. Note that this relies on $PGPASSWORD being set - if you're using a different terminal for this part, make sure you re-export the password:
+Connect to the tech preview PostgreSQL running in the container.
Note that this relies on setting the PGPASSWORD environment variable - if you're using a different terminal for this part, make sure you re-export the password: ```shell psql -h localhost -p 15432 -U postgres postgres @@ -82,10 +82,10 @@ postgres=# ``` -Install the pgai extension: +Install the aidb extension: ```sql -create extension pgai cascade; +create extension aidb cascade; __OUTPUT__ NOTICE: installing required extension "plpython3u" NOTICE: installing required extension "vector" @@ -99,9 +99,9 @@ __OUTPUT__ List of installed extensions Name | Version | Schema | Description ------------+---------+------------+------------------------------------------------------ - pgai | 0.0.1 | public | An extension to do the AIs + aidb | 0.0.2 | public | An extension to do the AIs plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language plpython3u | 1.0 | pg_catalog | PL/Python3U untrusted procedural language - vector | 0.6.0 | public | vector data type and ivfflat and hnsw access methods + vector | 0.7.2 | public | vector data type and ivfflat and hnsw access methods (4 rows) ``` diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx index 6aab16fbfc1..b60573afbc2 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/overview.mdx @@ -1,29 +1,29 @@ --- title: EDB Postgres AI AI/ML - Overview navTitle: Overview -description: Where to start with EDB Postgres AI AI/ML and the pgai tech preview. +description: Where to start with EDB Postgres AI AI/ML and the aidb tech preview. prevNext: True --- -At the heart of EDB Postgres® AI is the EDB Postgres AI database (pgai). This builds on Postgres's flexibility and extends its capability to include storing the vector data of embeddings. +At the heart of EDB Postgres® AI is the EDB Postgres AI database (aidb). This builds on Postgres's flexibility and extends its capability to include storing the vector data of embeddings. -The pgai extension is currently available as a tech preview. It will be continuously extended with new functions. This overview presents the functionality available to date. +The aidb extension is currently available as a tech preview. It will be continuously extended with new functions. This overview presents the functionality available to date. -![PGAI Overview](images/pgai-overview-withbackground.png) +![AIDB Overview](images/aidb-overview-withbackground.png) -pgai introduces the concept of a “retriever” that you can create for a given type and location of AI data. Currently pgai supports unstructured plain text documents as well as a set of image formats. This data can either reside in regular columns of a Postgres table or it can reside in an S3 compatible object storage bucket. +aidb introduces the concept of a “retriever” that you can create for a given type and location of AI data. Currently aidb supports unstructured plain text documents as well as a set of image formats. This data can either reside in regular columns of a Postgres table or it can reside in an S3 compatible object storage bucket. -A retriever encapsulates all processing that is needed to make the AI data in the provided source location searchable and retrievable through similarity. The application just needs to create a retriever via the `pgai.create_retriever()` function. When `auto_embedding=TRUE` is specified the pgai extension will automatically generate embeddings for all the data in the source location. 
+A retriever encapsulates all processing that is needed to make the AI data in the provided source location searchable and retrievable through similarity. The application just needs to create a retriever via the `aidb.create_retriever()` function. When `auto_embedding=TRUE` is specified the aidb extension will automatically generate embeddings for all the data in the source location. -Otherwise it will be up to the application to request a bulk generation of embeddings using `pgai.refresh_retriever()`. +Otherwise it will be up to the application to request a bulk generation of embeddings using `aidb.refresh_retriever()`. -Auto embedding is currently supported for AI data stored in Postgres tables and it automates the embedding updates using Postgres triggers. You can also combine the two options by using pgai.refresh_retriever() to embed all previously existing data and also setting `auto_embedding=TRUE` to generate embeddings for all new and changed data from now on. +Auto embedding is currently supported for AI data stored in Postgres tables and it automates the embedding updates using Postgres triggers. You can also combine the two options by using aidb.refresh_retriever() to embed all previously existing data and also setting `auto_embedding=TRUE` to generate embeddings for all new and changed data from now on. -All embedding generation, storage, indexing and management is handled by the pgai extension internally. The application just has to specify the encoder LLM that the retriever should be using for this specific data and use case. +All embedding generation, storage, indexing, and management is handled by the aidb extension internally. The application just has to specify the encoder LLM that the retriever should be using for this specific data and use case. -Once a retriever is created and all embeddings are up to date, the application can just use pgai.retrieve() to run a similarity search and retrieval by providing a query input. When the retriever is created for text data, the query input is also a text term. For image retrievers the query input is an image. The pgai retriever makes sure to use the same encoder LLM for the query input, conducts a similarity search and finally returns the ranked list of similar data from the source location. +Once a retriever is created and all embeddings are up to date, the application can just use aidb.retrieve() to run a similarity search and retrieval by providing a query input. When the retriever is created for text data, the query input is also a text term. For image retrievers the query input is an image. The aidb retriever makes sure to use the same encoder LLM for the query input, conducts a similarity search and finally returns the ranked list of similar data from the source location. -pgai currently supports a broad list of open encoder LLMs from HuggingFace as well as a set of OpenAI encoders. Consult the list of supported encoder LLMs in the pgai.encoders meta table. HuggingFace LLMs are running locally on the Postgres node, while OpenAI encoders involve a call out to the OpenAI cloud service. +aidb currently supports a broad list of open encoder LLMs from HuggingFace as well as a set of OpenAI encoders. Consult the list of supported encoder LLMs in the aidb.encoders meta table. HuggingFace LLMs are running locally on the Postgres node, while OpenAI encoders involve a call out to the OpenAI cloud service. 
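The create/refresh/retrieve cycle described above reduces to a few SQL calls. Here is a minimal sketch using only the call shapes shown verbatim later in this patch series; the retriever name and query string are illustrative:

```sql
-- Bulk-embed everything already present in the retriever's source.
-- This step is needed when the retriever was created with
-- auto_embedding = FALSE; with auto_embedding = TRUE, triggers keep
-- the embeddings current instead.
SELECT aidb.refresh_retriever('product_embeddings_bulk');

-- Similarity search: the query text is embedded with the retriever's
-- encoder LLM and the top 5 most similar source items are returned.
SELECT data FROM aidb.retrieve(
    'I like it',               -- query text
    5,                         -- top K
    'product_embeddings_bulk'  -- retriever name
);
```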
diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx index ddf57844f58..f8d2eb568c7 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/additional_functions.mdx @@ -1,15 +1,15 @@ --- -title: Additional functions and standalone embedding in pgai +title: Additional functions and standalone embedding in aidb navTitle: Additional functions -description: Other pgai extension functions and how to generate embeddings for images and text. +description: Other aidb extension functions and how to generate embeddings for images and text. --- ## Standalone embedding -Use the `generate_single_image_embedding` function to get embeddings for the given image. Currently, `model_provider` can only be `openai` or `huggingface`. You can check the list of valid embedding models and model providers from the Encoders Supported PGAI section. +Use the `generate_single_image_embedding` function to get embeddings for the given image. Currently, `model_provider` can only be `openai` or `huggingface`. You can check the list of valid embedding models and model providers from the Encoders Supported AIDB section. ```sql -SELECT pgai.generate_single_image_embedding( +SELECT aidb.generate_single_image_embedding( 'clip-vit-base-patch32', -- embedding model name 'openai', -- model provider 'https://s3.us-south.cloud-object-storage.appdomain.cloud', -- S3 endpoint @@ -26,7 +26,7 @@ __OUTPUT__ Use the `generate_text_embedding` function to get embeddings for the given image. Currently, the `model_provider` can only be `openai` or `huggingface`. ```sql -SELECT pgai.generate_text_embedding( +SELECT aidb.generate_text_embedding( 'text-embedding-3-small', -- embedding model name 'openai', -- model provider 0, -- dimensions, setting 0 will replace with the default value in encoder's table @@ -41,10 +41,10 @@ __OUTPUT__ ## Supported encoders -You can check the list of valid embedding models and model providers from pgai.encoders table +You can check the list of valid embedding models and model providers from aidb.encoders table ```sql -SELECT provider, count(*) encoder_model_count FROM pgai.encoders group by (provider); +SELECT provider, count(*) encoder_model_count FROM aidb.encoders group by (provider); __OUTPUT__ provider | encoder_model_count -------------+--------------------- @@ -55,11 +55,11 @@ __OUTPUT__ ## Available functions -You can find the complete list of currently available functions of the pgai extension by selecting from `information_schema.routines` any `routine_name` belonging to the pgai routine schema: +You can find the complete list of currently available functions of the aidb extension by selecting from `information_schema.routines` any `routine_name` belonging to the aidb routine schema: ``` -SELECT routine_name from information_schema.routines WHERE routine_schema='pgai'; +SELECT routine_name from information_schema.routines WHERE routine_schema='aidb'; __OUTPUT__ routine_name --------------------------------- diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx index 981d119b782..7c13a72b15c 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx @@ -1,5 +1,5 @@ --- -title: EDB Postgres AI AI/ML - Using the 
pgai tech preview +title: EDB Postgres AI AI/ML - Using the aidb tech preview navTitle: Using description: Using the EDB Postgres AI AI/ML tech preview to build a simple retrieval augmented generation (RAG) application in Postgres. navigation: @@ -8,9 +8,9 @@ navigation: - standard-encoders --- -This section shows how you can use your [newly installed pgai tech preview](install-tech-preview) to retrieve and generate AI data in Postgres. +This section shows how you can use your [newly installed aidb tech preview](install-tech-preview) to retrieve and generate AI data in Postgres. -* [Working with AI data in Postgres](working-with-ai-data-in-postgres) details how to use the pgai extension to work with AI data stored in Postgres tables. -* [Working with AI data in S3](working-with-ai-data-in-s3) covers how to use the pgai extension to work with AI data stored in S3 compatible object storage. -* [Standard encoders](standard-encoders) goes through the standard encoder LLMs that are supported by the pgai extension. +* [Working with AI data in Postgres](working-with-ai-data-in-postgres) details how to use the aidb extension to work with AI data stored in Postgres tables. +* [Working with AI data in S3](working-with-ai-data-in-s3) covers how to use the aidb extension to work with AI data stored in S3 compatible object storage. +* [Standard encoders](standard-encoders) goes through the standard encoder LLMs that are supported by the aidb extension. diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx index cd579a41a89..ce92940ddb2 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-S3.mdx @@ -1,19 +1,19 @@ --- title: Working with AI data stored in S3-compatible object storage navTitle: Working with AI data in S3 -description: How to work with AI data stored in S3-compatible object storage using the pgai extension. +description: How to work with AI data stored in S3-compatible object storage using the aidb extension. --- -The following examples demonstrate how to use the pgai functions with S3-compatible object storage. You can use the following examples as is, because they use a publicly accessible example S3 bucket. Or you can prepare your own S3 compatible object storage bucket with some test data and try the steps in this section with that data. +The following examples demonstrate how to use the aidb functions with S3-compatible object storage. You can use the following examples as is, because they use a publicly accessible example S3 bucket. Or you can prepare your own S3 compatible object storage bucket with some test data and try the steps in this section with that data. These examples also use image data and an appropriate image encoder LLM instead of text data. You could, though, use plain text data on object storage similar to the examples in [Working with AI data in Postgres](working-with-ai-data-in-postgres). ### Creating a retriever -Start by creating a retriever for images stored on s3-compatible object storage as the source using the `pgai.create_s3_retriever` function. +Start by creating a retriever for images stored on s3-compatible object storage as the source using the `aidb.create_s3_retriever` function. 
``` -pgai.create_s3_retriever( +aidb.create_s3_retriever( retriever_name text, schema_name text, model_name text, @@ -24,18 +24,18 @@ pgai.create_s3_retriever( ) ``` -* The retriever_name is used to identify and reference the retriever; set it to `image_embeddings` for this example. -* The schema_name is the schema where the source table is located. -* The model_name is the name of the embeddings encoder model for similarity data; set it to [`clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32) to use the open encoder model for image data from HuggingFace. -* The data_type is the type of data in the source table, which could be either `img` or `text`; set it to `img`. -* The bucket_name is the name of the S3 bucket where the data is stored; set this to `torsten`. -* The prefix is the prefix of the objects in the bucket; set this to an empty string because you want all the objects in that bucket. -* The endpoint_url is the URL of the S3 endpoint; set that to `https://s3.us-south.cloud-object-storage.appdomain.cloud` to access the public example bucket. +* The `retriever_name` is used to identify and reference the retriever; set it to `image_embeddings` for this example. +* The `schema_name` is the schema where the source table is located. +* The `model_name` is the name of the embeddings encoder model for similarity data; set it to [`clip-vit-base-patch32`](https://huggingface.co/openai/clip-vit-base-patch32) to use the open encoder model for image data from HuggingFace. +* The `data_type` is the type of data in the source table, which could be either `img` or `text`; set it to `img`. +* The `bucket_name` is the name of the S3 bucket where the data is stored; set this to `torsten`. +* The `prefix` is the prefix of the objects in the bucket; set this to an empty string because you want all the objects in that bucket. +* The `endpoint_url` is the URL of the S3 endpoint; set that to `https://s3.us-south.cloud-object-storage.appdomain.cloud` to access the public example bucket. This gives the following SQL command: ```sql -SELECT pgai.create_s3_retriever( +SELECT aidb.create_s3_retriever( 'image_embeddings', -- Name of the similarity retrieval setup 'public', -- Schema of the source table 'clip-vit-base-patch32', -- Embeddings encoder model for similarity data @@ -53,10 +53,10 @@ __OUTPUT__ ### Refreshing the retriever -Next, run the `pgai.refresh_retriever` function. +Next, run the `aidb.refresh_retriever` function. ```sql -SELECT pgai.refresh_retriever('image_embeddings'); +SELECT aidb.refresh_retriever('image_embeddings'); __OUTPUT__ refresh_retriever ------------------- @@ -66,13 +66,28 @@ __OUTPUT__ ### Retrieving data -Finally, run the `pgai.retrieve_via_s3` function with the required parameters to retrieve the top K most relevant (most similar) AI data items. Be aware that the object type is currently limited to image and text files. +Finally, run the `aidb.retrieve_via_s3` function with the required parameters to retrieve the top K most relevant (most similar) AI data items. Be aware that the object type is currently limited to image and text files. The syntax for `aidb.retrieve_via_s3` is: -```sql -Finally, run the `pgai.retrieve_via_s3` function with the required parameters to retrieve the top K most relevant (most similar) AI data items. Be aware that the object type is currently limited to image and text files. 
+```sql +aidb.retrieve_via_s3( + retriever_name text, + topk integer, + bucket text, + object text, + s3_endpoint text) +``` + +* The `retriever_name` is used to identify and reference the retriever; set it to `image_embeddings` for this example. +* The `topk` is the number of most relevant data items to retrieve; set this to 1. +* The `bucket` is the name of the S3 bucket where the data is stored. +* The `object` is the name of the object in the bucket. +* The `endpoint_url` is the URL of the S3 endpoint. + + +Run the `aidb.retrieve_via_s3` function with the required parameters to retrieve the top K most relevant (most similar) AI data items. Be aware that the object type is currently limited to image and text files. ```sql -SELECT data from pgai.retrieve_via_s3( +SELECT data from aidb.retrieve_via_s3( 'image_embeddings', -- retriever's name 1, -- top K 'torsten', -- S3 bucket name diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx index b67c93cbc52..288c2eb0d7d 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/working-with-ai-data-in-postgres.mdx @@ -1,7 +1,7 @@ --- title: Working with AI data stored in Postgres tables navTitle: Working with AI data in Postgres -description: How to work with AI data stored in Postgres tables using the pgai extension. +description: How to work with AI data stored in Postgres tables using the aidb extension. --- The examples on this page are about working with AI data stored in columns in the Postgres table. @@ -23,10 +23,10 @@ CREATE TABLE ## Working with auto embedding -Next, you are going to create a retriever with the just created products table as the source using the `pgai.create_pg_retriever` function which has this syntax: +Next, you are going to create a retriever with the just created products table as the source using the `aidb.create_pg_retriever` function which has this syntax: ```sql -pgai.create_pg_retriever( +aidb.create_pg_retriever( retriever_name text, schema_name text, primary_key text, @@ -50,7 +50,7 @@ pgai.create_pg_retriever( This gives the following SQL command: ```sql -SELECT pgai.create_pg_retriever( +SELECT aidb.create_pg_retriever( 'product_embeddings_auto', -- Retriever name 'public', -- Schema 'product_id', -- Primary key @@ -85,10 +85,10 @@ __OUTPUT__ INSERT 0 9 ``` -Now you can use the retriever, by specifying the retriever name, to perform a similarity retrieval of the top K most relevant, in this case most similar, AI data items. You can do this by running the `pgai.retrieve` function with the required parameters: +Now you can use the retriever, by specifying the retriever name, to perform a similarity retrieval of the top K most relevant, in this case most similar, AI data items. 
You can do this by running the `aidb.retrieve` function with the required parameters: ```sql -pgai.retrieve( +aidb.retrieve( query text, top_k integer, retriever_name text @@ -102,17 +102,17 @@ pgai.retrieve( This gives the following SQL command: ```sql -SELECT data FROM pgai.retrieve( +SELECT data FROM aidb.retrieve( 'I like it', -- The query text to retrieve the top similar data 5, -- top K 'product_embeddings_auto' -- retriever's name ); __OUTPUT__ - data -------------------------------------- + data +-------------------------------------- {'data': 'Hamburger - Tasty'} {'data': 'Cheesburger - Very tasty'} - {'data': 'Fries - Dunno'} + {'data': 'Pizza - Mkay'} {'data': 'Sandwich - So what'} {'data': 'Kebab - Maybe'} (5 rows) @@ -123,7 +123,7 @@ __OUTPUT__ You can now create a retriever without auto embedding. This means that the application has control over when the embeddings computation occurs. It also means that the computation is a bulk operation. For demonstration you can simply create a second retriever for the same products table that you just previously created the first retriever for, but setting `auto_embedding` to false. ```sql -SELECT pgai.create_pg_retriever( +SELECT aidb.create_pg_retriever( 'product_embeddings_bulk', -- Retriever name 'public', -- Schema 'product_id', -- Primary key @@ -140,10 +140,10 @@ __OUTPUT__ (1 row) ``` -The AI records are already in the table though. As this second retriever is newly created, it won't have created any embeddings. Running `pgai.retrieve` using the retriever now doesn't return any results: +The AI records are already in the table though. As this second retriever is newly created, it won't have created any embeddings. Running `aidb.retrieve` using the retriever now doesn't return any results: ```sql -SELECT data FROM pgai.retrieve( +SELECT data FROM aidb.retrieve( 'I like it', -- The query text to retrieve the top similar data 5, -- top K 'product_embeddings_bulk' -- retriever's name @@ -154,10 +154,10 @@ __OUTPUT__ (0 rows) ``` -You need to run a bulk generation of embeddings before performing any retrieval. You can do this using the `pgai.refresh_retriever` function: +You need to run a bulk generation of embeddings before performing any retrieval. You can do this using the `aidb.refresh_retriever` function: ``` -pgai.refresh_retriever( +aidb.refresh_retriever( retriever_name text ) ``` @@ -165,11 +165,11 @@ pgai.refresh_retriever( The `retriever_name` is the name of the retriever. Our retriever's name is `product_embeddings_bulk`. 
So the SQL command is:

```sql
-SELECT pgai.refresh_retriever(
+SELECT aidb.refresh_retriever(
 	'product_embeddings_bulk' -- name of the retriever
 );
 __OUTPUT__
-INFO:  inserted table name public._pgai_embeddings_product_embeddings_bulk
+INFO:  inserted table name public._aidb_embeddings_product_embeddings_bulk
  refresh_retriever
 -------------------
 
```

You can now run that retrieve operation using the second retriever and get the same results as with the first retriever:

```sql
-SELECT data FROM pgai.retrieve(
+SELECT data FROM aidb.retrieve(
 	'I like it', -- The query text to retrieve the top similar data
 	5, -- top K
 	'product_embeddings_bulk' -- retriever's name
 );
 __OUTPUT__
- data
--------------------------------------
+ data
+--------------------------------------
  {'data': 'Hamburger - Tasty'}
  {'data': 'Cheesburger - Very tasty'}
- {'data': 'Fries - Dunno'}
+ {'data': 'Pizza - Mkay'}
  {'data': 'Sandwich - So what'}
  {'data': 'Kebab - Maybe'}
 (5 rows)
```

@@ -208,68 +208,68 @@
 INSERT 0 2
 ```

The new data is automatically picked up in the retrieval from the first retriever with auto embeddings:

```sql
-SELECT data FROM pgai.retrieve(
+SELECT data FROM aidb.retrieve(
 	'I like it', -- The query text to retrieve the top similar data
 	5, -- top K
 	'product_embeddings_auto' -- retriever's name
 );
 __OUTPUT__
- data
+ data
 --------------------------------------
  {'data': 'Hamburger - Tasty'}
  {'data': 'Cheesburger - Very tasty'}
+ {'data': 'Pizza - Mkay'}
  {'data': 'Sandwich - So what'}
- {'data': 'Kebab - Maybe'}
  {'data': 'Ramen - Delicious'}
 (5 rows)
```

-The second retriever without auto embedding doesn't reflect the new data. It can only do so when once there has been another explicit call to `pgai.refresh_retriever`. Until then, the results don't change:
+The second retriever without auto embedding doesn't reflect the new data. It can only do so once there has been another explicit call to `aidb.refresh_retriever`.
Until then, the results don't change: ```sql -SELECT data FROM pgai.retrieve( +SELECT data FROM aidb.retrieve( 'I like it', -- The query text to retrieve the top similar data 5, -- top K 'product_embeddings_bulk' -- retriever's name ); __OUTPUT__ - data -------------------------------------- + data +-------------------------------------- {'data': 'Hamburger - Tasty'} {'data': 'Cheesburger - Very tasty'} - {'data': 'Fries - Dunno'} + {'data': 'Pizza - Mkay'} {'data': 'Sandwich - So what'} {'data': 'Kebab - Maybe'} (5 rows) ``` -If you now call `pgai.refresh_retriever()` again, the embeddings computation uses the new data to refresh the embeddings: +If you now call `aidb.refresh_retriever()` again, the embeddings computation uses the new data to refresh the embeddings: ```sql -SELECT pgai.refresh_retriever( +SELECT aidb.refresh_retriever( 'product_embeddings_bulk' -- name of the retriever ); __OUTPUT__ -INFO: inserted table name public._pgai_embeddings_product_embeddings_bulk +INFO: inserted table name public._aidb_embeddings_product_embeddings_bulk refresh_retriever ------------------- ``` -And the new data shows up in the results of the query when you call the `pgai.retrieve` function again: +And the new data shows up in the results of the query when you call the `aidb.retrieve` function again: ```sql -SELECT data FROM pgai.retrieve( +SELECT data FROM aidb.retrieve( 'I like it', -- The query text to retrieve the top similar data 5, -- top K 'product_embeddings_bulk' -- retriever's name ); __OUTPUT__ - data + data -------------------------------------- {'data': 'Hamburger - Tasty'} {'data': 'Cheesburger - Very tasty'} + {'data': 'Pizza - Mkay'} {'data': 'Sandwich - So what'} - {'data': 'Kebab - Maybe'} {'data': 'Ramen - Delicious'} (5 rows) ``` @@ -278,4 +278,4 @@ You used the two different retrievers for the same source data just to demonstra In practice you may want to combine auto embedding and refresh_retriever() in a single retriever to conduct an initial embedding of data that existed before you created the retriever and then rely on auto embedding for any future data that's ingested, updated, or deleted. -You should consider relying on `pgai.refresh_retriever`, and not using auto embedding, if you typically ingest a lot of AI data at once as a batch. +You should consider relying on `aidb.refresh_retriever`, and not using auto embedding, if you typically ingest a lot of AI data at once as a batch. diff --git a/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx b/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx index 95d88aebbf6..bd66d0de8ae 100644 --- a/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx @@ -10,7 +10,7 @@ You'll want to look at the [EDB Postgres® AI Analytics](/edb-postgres-ai/analyt ## Are you looking at running machine learning models on your Postgres data? -You'll want to look at the [EDB Postgres® AI Machine Learning](/edb-postgres-ai/ai-ml) documentation, which covers the technical preview of the pgai extension. +You'll want to look at the [EDB Postgres® AI Machine Learning](/edb-postgres-ai/ai-ml) documentation, which covers the technical preview of the aidb extension. ## Do you need to migrate your data to Postgres? 
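The combined pattern recommended at the end of the working-with-ai-data-in-postgres page above (auto embedding for future changes plus one bulk pass for pre-existing rows) comes down to a single extra call after creating the retriever. A minimal sketch, assuming a retriever with the hypothetical name `products_retriever` was created with `auto_embedding=TRUE`:

```sql
-- Rows that existed before the retriever was created have no embeddings
-- yet; one explicit refresh embeds that backlog.
SELECT aidb.refresh_retriever('products_retriever');
-- From here on, triggers embed new and changed rows automatically, so no
-- further refresh calls are needed for incremental changes.
```

For batch-heavy ingestion the opposite trade-off applies: skip auto embedding and call `aidb.refresh_retriever` once after each load, as noted above.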
diff --git a/advocacy_docs/edb-postgres-ai/overview/latest-release-news.mdx b/advocacy_docs/edb-postgres-ai/overview/latest-release-news.mdx index 34fa7cdab9c..88108583663 100644 --- a/advocacy_docs/edb-postgres-ai/overview/latest-release-news.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/latest-release-news.mdx @@ -39,9 +39,9 @@ Delta Lake protocol. Customers can sync tables from transactional sources (initially, EDB Postgres AI Cloud Service databases) into Lakehouse tables in managed storage locations (initially, S3 object storage buckets). -### Technical preview of [EDB pgai extension](/edb-postgres-ai/ai-ml) +### Technical preview of [EDB aidb extension](/edb-postgres-ai/ai-ml) -Customers can now access a technical preview of the new EDB pgai extension, +Customers can now access a technical preview of the new EDB aidb extension, which seamlessly integrates and manages AI data for enterprise workloads with EDB Postgres AI, to help understand your AI data directly out of the box. Built on top of Postgres vector data support, this tech preview enables Postgres to @@ -49,7 +49,7 @@ run LLMs and directly manage, process, search and retrieve AI data such as text documents or images to accelerate AI application development and operationalization across your company. -In this technical preview, you'll have the opportunity to explore the pgai extension +In this technical preview, you'll have the opportunity to explore the aidb extension and build AI-infused similarity search applications — for instance, a Retrieval-Augmented Generation (RAG) application using Postgres. RAG applications utilize a powerful combination of retrieval systems and language diff --git a/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx b/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx index 5801bc7e378..03cb0e02190 100644 --- a/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx @@ -42,7 +42,7 @@ Filtering out the data noise and revealing insights and value, Lakehouse analyti * At the heart of Analytics is custom-built object storage for your data. Built to bring structured and unstructured data together, Lakehouse nodes support numerous formats to bring cold data in, ready for analysis. ## [EDB Postgres AI AI/ML](/edb-postgres-ai/ai-ml) -* Postgres has proven its capability as a flexible data environment. With vector data as the core of generative AI, ir's already infused into EDB Postgres AI, providing a platform for a range of practical and effective AI/ML solutions. A technical preview of this capability is available for the Postgres pgai extension. +* Postgres has proven its capability as a flexible data environment. With vector data as the core of generative AI, it's already infused into EDB Postgres AI, providing a platform for a range of practical and effective AI/ML solutions. A technical preview of this capability is available for the Postgres aidb extension. ## [EDB Postgres AI Platforms and tools](/edb-postgres-ai/tools) * Postgres extensions are a source of its power and popularity, and are one of the categories that fall within this element of EDB Postgres AI. 
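Since these pages build on Postgres vector data support (the `vector` extension visible in the earlier `\dx` output), here is a minimal standalone illustration of the underlying similarity primitive. It is independent of aidb; the table, dimension, and values are invented for the example:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE demo_embeddings (
    id        bigserial PRIMARY KEY,
    embedding vector(3)   -- tiny dimension, for illustration only
);

INSERT INTO demo_embeddings (embedding)
VALUES ('[1,0,0]'), ('[0.9,0.1,0]'), ('[0,1,0]');

-- Nearest neighbors by Euclidean distance (<-> is pgvector's L2 operator).
SELECT id, embedding
FROM demo_embeddings
ORDER BY embedding <-> '[1,0,0]'
LIMIT 2;
```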
From 16ef7dc6968cd0e3b20c28a46fb7748f21c5672a Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 30 Jul 2024 09:28:30 +0100 Subject: [PATCH 09/15] Fix front page index Signed-off-by: Dj Walker-Morgan --- src/pages/index.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/pages/index.js b/src/pages/index.js index 1cc5a0858d2..daa7f6cda9d 100644 --- a/src/pages/index.js +++ b/src/pages/index.js @@ -244,7 +244,7 @@ const Page = () => { to="/edb-postgres-ai/ai-ml" > - Overview of pgai + Overview of aidb Install the Tech Preview From 86d63cc050c16012f2c427216c76f85595c525bd Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Thu, 18 Jul 2024 04:30:28 +0000 Subject: [PATCH 10/15] tooling: rewrite links based on git renames check links and annotate --- .github/workflows/check-links.yml | 22 + .vscode/launch.json | 26 + .../automation/actions/link-check/action.yml | 10 + tools/automation/actions/link-check/index.js | 489 ++++ .../link-check/lib/mdast-embedded-hast.mjs | 195 ++ .../actions/link-check/package-lock.json | 1978 +++++++++++++++++ .../actions/link-check/package.json | 36 + tools/user/reorg/links/README.md | 7 + .../reorg/links/lib/mdast-embedded-hast.mjs | 195 ++ tools/user/reorg/links/package-lock.json | 1713 ++++++++++++++ tools/user/reorg/links/package.json | 35 + .../reorg/links/update-links-to-renames.js | 298 +++ tools/user/reorg/redirects/package-lock.json | 14 +- tools/user/reorg/redirects/package.json | 3 + 14 files changed, 5014 insertions(+), 7 deletions(-) create mode 100644 .github/workflows/check-links.yml create mode 100644 .vscode/launch.json create mode 100644 tools/automation/actions/link-check/action.yml create mode 100644 tools/automation/actions/link-check/index.js create mode 100644 tools/automation/actions/link-check/lib/mdast-embedded-hast.mjs create mode 100644 tools/automation/actions/link-check/package-lock.json create mode 100644 tools/automation/actions/link-check/package.json create mode 100644 tools/user/reorg/links/README.md create mode 100644 tools/user/reorg/links/lib/mdast-embedded-hast.mjs create mode 100644 tools/user/reorg/links/package-lock.json create mode 100644 tools/user/reorg/links/package.json create mode 100644 tools/user/reorg/links/update-links-to-renames.js diff --git a/.github/workflows/check-links.yml b/.github/workflows/check-links.yml new file mode 100644 index 00000000000..c099a184ae5 --- /dev/null +++ b/.github/workflows/check-links.yml @@ -0,0 +1,22 @@ +name: check links on PR +on: + pull_request: + types: [opened, synchronize] +jobs: + check-links: + runs-on: ubuntu-latest + steps: + - name: Checkout repo + uses: actions/checkout@v4 + with: + lfs: true + ref: ${{ github.event.pull_request.head.sha }} + + - name: setup node + uses: actions/setup-node@v4 + + - name: install dependencies + run: npm --prefix ./tools/automation/actions/link-check ci + + - name: check links + uses: ./tools/automation/actions/link-check diff --git a/.vscode/launch.json b/.vscode/launch.json new file mode 100644 index 00000000000..e4b1fb29ca2 --- /dev/null +++ b/.vscode/launch.json @@ -0,0 +1,26 @@ +{ + // Use IntelliSense to learn about possible attributes. + // Hover to view descriptions of existing attributes. 
+  // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
+  "version": "0.2.0",
+  "configurations": [
+    {
+      "type": "node",
+      "request": "launch",
+      "name": "Launch update links to renames",
+      "skipFiles": [
+        "<node_internals>/**"
+      ],
+      "program": "${workspaceFolder}/tools/user/reorg/links/update-links-to-renames.js"
+    },
+    {
+      "type": "node",
+      "request": "launch",
+      "name": "Launch link-check",
+      "skipFiles": [
+        "<node_internals>/**"
+      ],
+      "program": "${workspaceFolder}/tools/automation/actions/link-check/index.js"
+    }
+  ]
+}
\ No newline at end of file
diff --git a/tools/automation/actions/link-check/action.yml b/tools/automation/actions/link-check/action.yml
new file mode 100644
index 00000000000..7d8c3526cad
--- /dev/null
+++ b/tools/automation/actions/link-check/action.yml
@@ -0,0 +1,10 @@
+name: 'Check links and redirects'
+description: 'Checks local link paths and redirects; rewrites links that would go through a local redirect'
+runs:
+  using: 'node20'
+  main: 'index.js'
+inputs:
+  update-links:
+    description: "Set to true/1 causes the action to update links where possible to avoid redirects"
+    required: false
+
diff --git a/tools/automation/actions/link-check/index.js b/tools/automation/actions/link-check/index.js
new file mode 100644
index 00000000000..04e40eae71f
--- /dev/null
+++ b/tools/automation/actions/link-check/index.js
@@ -0,0 +1,489 @@
+import core, { summary } from "@actions/core";
+import github from "@actions/github";
+import yaml from "js-yaml";
+import path from "path";
+import remarkParse from "remark-parse";
+import mdx from "remark-mdx";
+import unified from "unified";
+import remarkFrontmatter from "remark-frontmatter";
+import remarkStringify from "remark-stringify";
+import admonitions from "remark-admonitions";
+import glob from "fast-glob";
+import { visitParents } from "unist-util-visit-parents";
+import remarkMdxEmbeddedHast from "./lib/mdast-embedded-hast.mjs";
+import mdast2string from "mdast-util-to-string";
+import GithubSlugger from "github-slugger";
+import toVfile from "to-vfile";
+const { read, write } = toVfile;
+
+const docsUrl = "https://www.enterprisedb.com/docs";
+// add path here to ignore link warnings
+const noWarnPaths = [
+  "/playground/1/01_examples/link-tests",
+  "/playground/1/01_examples/link-test",
+];
+const basePath = path.resolve(
+  path.dirname(new URL(import.meta.url).pathname),
+  "../../../..",
+);
+
+let ghCore = core;
+
+if (!process.env.GITHUB_REF) {
+  ghCore = {
+    getInput: (key) => undefined,
+    summary: {
+      addRaw: (markup) => {
+        console.log(markup);
+      },
+      write: () => {},
+      stringify: () => {},
+    },
+    setFailed: (message) => {
+      console.error(message);
+    },
+    error: (message, properties) => {
+      console.error(
+        "⚠️⚠️ " +
+          formatErrorPath(
+            properties.file,
+            properties.startLine,
+            properties.startColumn,
+          ) +
+          "\n\t" +
+          message,
+      );
+    },
+    warning: (message, properties) => {
+      console.warn(
+        "⚠️ " +
+          formatErrorPath(
+            properties.file,
+            properties.startLine,
+            properties.startColumn,
+          ) +
+          "\n\t" +
+          message,
+      );
+    },
+    notice: (message, properties) => {
+      console.log(
+        formatErrorPath(
+          properties.file,
+          properties.startLine,
+          properties.startColumn,
+        ) +
+          "\n\t" +
+          message,
+      );
+    },
+  };
+
+  function formatErrorPath(filePath, line, column) {
+    return `${path.relative(basePath, filePath)}:${line}:${column}`;
+  }
+}
+
+main().catch((err) => ghCore.setFailed(err));
+
+async function main() {
+  const updateLinks = process.env.GITHUB_REF ?
!!ghCore.getInput("update-links") + : true; + const sourceFiles = await glob([ + path.resolve(basePath, "product_docs/**/*.mdx"), + path.resolve(basePath, "advocacy_docs/**/*.mdx"), + ]); + + const allValidUrlPaths = new Map(); + + const pipeline = unified() + .use(remarkParse) + .use(remarkStringify, { emphasis: "*", bullet: "-", fences: true }) + .use(remarkMdxEmbeddedHast) + .use(admonitions, { + tag: "!!!", + icons: "none", + infima: true, + customTypes: { + seealso: "note", + hint: "tip", + interactive: "interactive", + }, + }) + .use(mdx) + .use(remarkFrontmatter) + .freeze(); + + // first pass: scan all source files to identify valid URLs, mapping redirects to canonical path + console.log( + `Scanning ${sourceFiles.length} pages for redirects and link targets`, + ); + + const scanner = pipeline().use(index); + + for (const sourcePath of sourceFiles) { + const metadata = { + canonical: fsPathToURLPath(sourcePath), + index: isIndex(sourcePath), + slugs: [], + redirects: [], + source: sourcePath, + }; + allValidUrlPaths.set(metadata.canonical, metadata); + if (isVersioned(sourcePath)) { + const splitPath = metadata.canonical.split(path.posix.sep); + metadata.product = splitPath[1]; + metadata.version = splitPath[2]; + allValidUrlPaths.set(latestVersionURLPath(sourcePath), metadata); + } + const input = await read(sourcePath); + input.data = { allValidUrlPaths, metadata }; + const ast = scanner.parse(input); + await scanner.run(ast, input); + } + + // compile product versions + const productVersions = {}; + + for (let [, metadata] of allValidUrlPaths) { + if (!metadata.product) continue; + + const list = (productVersions[metadata.product] = + productVersions[metadata.product] || []); + if (!list.includes(metadata.version)) list.push(metadata.version); + } + + for (const product in productVersions) { + productVersions[product] = productVersions[product].sort((a, b) => + b.localeCompare(a, undefined, { numeric: true }), + ); + } + + // handle product versions: update "latest" paths to point to the highest version number, where available + for (let [urlPath, metadata] of allValidUrlPaths) { + if (!metadata.version) continue; + + const splitPath = urlPath.split(path.posix.sep); + if (splitPath[2] !== "latest" || splitPath[1] !== metadata.product) + continue; + + // all versions for this path. + // Null entries for versions that don't exist. + // Last version is the first non-null in the list, e.g. pathVersions.filter((p) => !!p)[0] + // If first entry in list is null, there is no "latest" path (and such paths in links should be rewritten). + const allPaths = [urlPath, ...metadata.redirects]; + const pathVersions = productVersions[metadata.product].map((v) => { + const versionPaths = allPaths.map((p) => replacePathVersion(p, v)); + for (let vp of versionPaths) { + const match = allValidUrlPaths.get(vp); + if (match) return match; + } + return null; + }); + const latestMetadata = pathVersions[0]; // may be null - in which case there is no "latest", just ... 
last + const lastMetadata = pathVersions.find((p) => !!p); + if (!lastMetadata) debugger; + + lastMetadata.latest = latestMetadata === lastMetadata; + if (lastMetadata !== metadata) allValidUrlPaths.set(urlPath, lastMetadata); + } + + // second pass: rewrite links in source files to point to canonical path, report errors + + const processor = pipeline().use(cleanup); + + console.log( + `Cross-referencing pages with ${allValidUrlPaths.size} valid URL paths`, + ); + + let filesUpdated = 0, + linksChecked = 0, + linksUpdated = 0, + brokenPaths = 0, + brokenSlugs = 0; + for (const sourcePath of sourceFiles) { + const urlPath = fsPathToURLPath(sourcePath); + const metadata = allValidUrlPaths.get(urlPath); + const input = await read(sourcePath); + input.data = { metadata, allValidUrlPaths }; + let result = await processor.process(input); // should normally return input + linksChecked += metadata.linksChecked || 0; + for (let message of result.messages) { + const props = { + title: message.ruleId, + file: message.file, + startLine: message.line, + startColumn: message.column, + }; + // don't use fatal messages in vFile, as they are noisy in the console. + // DO use errors for pathCheck rules, as that doubles the number of annotations GitHub will show + if (message.fatal || message.ruleId === "pathCheck") + ghCore.error(message.reason, props); + else if (message.fatal === false) ghCore.warning(message.reason, props); + else if (message.ruleId !== "urlPathRewrite" || updateLinks) + ghCore.notice(message.reason, props); + if (message.ruleId === "pathCheck") ++brokenPaths; + else if (message.ruleId === "slugCheck") ++brokenSlugs; + } + linksUpdated += metadata.linksUpdated || 0; + if (metadata.linksUpdated && updateLinks) { + await write(result); + ++filesUpdated; + } + } + + ghCore.summary.addRaw(`## Docs internal link-checker + +Links checked: **${linksChecked}** + +- **${brokenPaths}** bad paths +- **${brokenSlugs}** bad slugs`); + + if (updateLinks) + ghCore.summary.addRaw(` + +Links corrected: **${linksUpdated}** +Files updated: **${filesUpdated}**`); + else if (linksUpdated) + ghCore.summary.addRaw(` + +**${linksUpdated}** links could be updated to avoid redirects; +run \`node tools/automation/actions/link-check\` locally.`); + + ghCore.summary.write(); + + if (brokenPaths > 0) + ghCore.setFailed(`Broken links found; please fix before publishing!`); +} + +function index() { + // grab and store: + // - each redirect (normalized) + // - each link target (slugs, id'd elements) + return (tree, file) => { + const { allValidUrlPaths, metadata } = file.data; + const slugger = new GithubSlugger(); + + visitParents(tree, ["element", "heading", "yaml"], (node) => { + if (node.type === "element" && node.properties.id) + metadata.slugs.push(node.properties.id); + else if (node.type === "heading") { + metadata.slugs.push(slugger.slug(mdast2string(node))); + } else if (node.type === "yaml") { + const frontmatter = yaml.load(node.value); + for (let redirect of normalizeRedirects(frontmatter, metadata)) { + metadata.redirects.push(redirect); + allValidUrlPaths.set(redirect, metadata); + } + } + }); + }; + + function normalizeRedirects(frontmatter, metadata) { + if (!frontmatter.redirects?.length) return []; + return frontmatter.redirects.flatMap((redirect) => { + let urlPath = path.posix.resolve( + path.posix.sep, + metadata.canonical, + redirect, + ); + if (metadata.version) { + const splitPath = urlPath.split(path.posix.sep); + if (metadata.product === splitPath[1]) { + const versioned = path.posix.join( + 
+            path.posix.sep,
+            metadata.product,
+            metadata.version,
+            ...splitPath.slice(3),
+          );
+          const unversioned = path.posix.join(
+            path.posix.sep,
+            metadata.product,
+            "latest",
+            ...splitPath.slice(3),
+          );
+          return [versioned, unversioned];
+        }
+      }
+      return urlPath;
+    });
+  }
+}
+
+function cleanup() {
+  // identify each link:
+  // - check for valid path
+  // - check for valid slug
+  // - if path and slug are valid but path is redirect, update path
+  return (tree, file) => {
+    const { allValidUrlPaths, metadata } = file.data;
+
+    const relativize = ({ path: relative, latest }) => {
+      // if path is identical to current: strip all but hash
+      if (relative === metadata.canonical) return "";
+
+      const currentDirname = metadata.index
+        ? metadata.canonical
+        : path.posix.dirname(metadata.canonical);
+      // if dirname is identical to current: strip all but filename and hash
+      // if dirname contains current dirname: relative path + hash
+      if (path.posix.dirname(relative).startsWith(currentDirname))
+        relative = path.posix.relative(currentDirname, relative);
+      // if versioned and pointing to latest, use "latest" path
+      else if (latest) relative = replacePathVersion(relative);
+      // otherwise: full path
+      return relative;
+    };
+
+    const mapUrlToCanonical = (url, position) => {
+      let test = normalizeUrl(url, metadata.canonical, metadata.index);
+      if (!test.href.startsWith(docsUrl)) return url;
+      if (test.href === docsUrl) return url;
+      if (path.posix.extname(test.pathname)) return url;
+
+      metadata.linksChecked = (metadata.linksChecked || 0) + 1;
+
+      // check valid path (may be a redirect, don't care yet)
+      let testPath = test.pathname
+        .replace(/^\/docs/, "")
+        .replace(/\/$/, "")
+        .trim();
+      if (testPath.length && !allValidUrlPaths.has(testPath)) {
+        if (!noWarnPaths.includes(metadata.canonical))
+          file.message(
+            `invalid URL path: ${url}` +
+              (url !== testPath + "/" + test.hash ? ` (${testPath})` : ""),
+            position,
+            "link-check:pathCheck",
+          );
+        return url;
+      }
+
+      let destMetadata = testPath.length
+        ? allValidUrlPaths.get(testPath)
+        : metadata;
+
+      // check if path needs to be remapped. Must be:
+      // - not the canonical URL, and
+      // - not the "latest" version of canonical for a latest destination
+      // When remapping, if destination is the last version then use "latest" path
+      if (
+        testPath !== destMetadata.canonical &&
+        !(
+          destMetadata.latest &&
+          testPath === replacePathVersion(destMetadata.canonical)
+        )
+      ) {
+        // check for latest / non-latest mismatch: that's a link in an older version using a "latest"
+        // path in a link. That might be intentional, but if we're hitting a redirect there's a good chance
+        // the intent was to link to a page in the older version, back when it was current
+        if (
+          !metadata.latest &&
+          destMetadata.latest &&
+          metadata.product === destMetadata.product
+        ) {
+          const olderDest = allValidUrlPaths.get(
+            replacePathVersion(testPath, metadata.version),
+          );
+          if (olderDest) destMetadata = olderDest;
+        }
+
+        const newPath =
+          relativize({
+            path: destMetadata.canonical,
+            latest: destMetadata.latest,
+          }) +
+          "/" +
+          test.hash;
+
+        metadata.linksUpdated = (metadata.linksUpdated || 0) + 1;
+        file.info(
+          `Update link path ${url} to ${newPath}`,
+          position,
+          "link-check:urlPathRewrite",
+        );
+        url = newPath;
+      }
+
+      // check valid slug
+      if (
+        test.hash &&
+        !destMetadata.slugs.some((s) => s === test.hash.slice(1))
+      ) {
+        if (!noWarnPaths.includes(metadata.canonical))
+          file.message(
+            `cannot find slug for ${test.hash} in ${path.relative(basePath, destMetadata.source)}`,
+            position,
+            "link-check:slugCheck",
+          );
+      }
+
+      return url;
+    };
+
+    visitParents(tree, ["link", "element"], (node) => {
+      try {
+        if (
+          node.type === "element" &&
+          node.tagName === "a" &&
+          node.properties.href
+        )
+          node.properties.href = mapUrlToCanonical(
+            node.properties.href,
+            node.position,
+          );
+        else if (node.type === "link")
+          node.url = mapUrlToCanonical(node.url, node.position);
+      } catch (e) {
+        file.message(e, node.position);
+      }
+    });
+  };
+}
+
+function normalizeUrl(url, pagePath, index) {
+  let dest = new URL(url, "local:" + pagePath + (index ? "/" : ""));
+  if (dest.protocol === "local:" && dest.host === "")
+    dest = new URL(
+      docsUrl +
+        dest.pathname.replace(/\/index\.mdx?$|\.mdx?$/, "").replace(/\/$/, "") +
+        dest.hash,
+    );
+  return dest;
+}
+
+function isIndex(fsPath) {
+  return /\/index\.mdx?$/.test(fsPath);
+}
+
+function isVersioned(fsPath) {
+  return fsPath.includes("product_docs");
+}
+
+function replacePathVersion(urlPath, version = "latest") {
+  const splitPath = urlPath.split(path.posix.sep);
+  return path.posix.join(
+    path.posix.sep,
+    splitPath[1],
+    version,
+    ...splitPath.slice(3),
+  );
+}
+
+function fsPathToURLPath(fsPath) {
+  // 1. strip leading product_docs/docs and advocacy_docs
+  // 2. strip trailing index.mdx
+  // 3. strip trailing .mdx
+  // 4. strip trailing /
+  const docsLocations = /product_docs\/docs|advocacy_docs/;
+  return fsPath
+    .split(docsLocations)[1]
+    .replace(/\/index\.mdx$|\.mdx$/, "")
+    .replace(/\/$/, "");
+}
+
+function latestVersionURLPath(fsPath) {
+  const splitPath = fsPathToURLPath(fsPath).split("/");
+  return path.posix.join("/", splitPath[1], "latest", ...splitPath.slice(3));
+}
diff --git a/tools/automation/actions/link-check/lib/mdast-embedded-hast.mjs b/tools/automation/actions/link-check/lib/mdast-embedded-hast.mjs
new file mode 100644
index 00000000000..67b7100d875
--- /dev/null
+++ b/tools/automation/actions/link-check/lib/mdast-embedded-hast.mjs
@@ -0,0 +1,195 @@
+//
+// This is a collection of dirty hacks to make working with HTML embedded in Markdown a bit easier
+// ...consider yourself warned
+//
+
+import unified from "unified";
+import visit from "unist-util-visit";
+import rehypeParse from "rehype-parse";
+import hast2html from "hast-util-to-html";
+import { htmlVoidElements } from "html-void-elements";
+
+export default function remarkMdxEmbeddedHast() {
+  const compiler = this.Compiler;
+  if (compiler && compiler.prototype && compiler.prototype.visitors)
+    attachCompiler(compiler);
+  return transformer;
+
+  function transformer(tree, file) {
+    visit(tree, "jsx", visitor);
+
+    function visitor(node, index, parent) {
+      // ignore comments
+      if (/^\s*<!--/.test(node.value)) return;
+
+      // parse the raw HTML so we can figure out what we're dealing with
+      const hast = unified()
+        .use(rehypeParse, {
+          emitParseErrors: true,
+          verbose: true,
+          fragment: true,
+        })
+        .parse(node.value);
+      offsetPosition(hast, node.position.start);
+
+      if (isCompleteFragment(hast)) {
+        // a self-contained chunk of HTML: hang the hast off the node for stringification
+        node.type = "jsx-hast";
+        node.children = hast.children;
+      } else if (isOpeningTag(hast, node.value)) {
+        // a lone opening tag: capture the following mdast content up to the closing tag
+        const replacement = captureToEnd(node, index, parent, hast);
+        if (replacement) Object.assign(node, replacement);
+      }
+    }
+
+    // is this a complete, self-contained fragment of HTML?
+    function isCompleteFragment(root) {
+      // more than one child parsed out: call it complete
+      if (root.children.length > 1) return true;
+      // For a single child, check the position of the closing tag; if that doesn't exist,
+      // we can't handle it unless this specific sort of element doesn't need one
+      return (
+        root.children[0].data?.position?.closing ||
+        htmlVoidElements.includes(root.children[0].tagName?.toLowerCase())
+      );
+    }
+
+    // ok, the other scenario that's useful to handle here is a mixture of HTML
+    // and Markdown. This can be inline (the 3rd man) or block content -
+    // AKA, a lone opening tag, hopefully with a closing tag later on
+    // if self-closing or a known-void, ignore for now to avoid stepping on JSX
+    function isOpeningTag(root, sourceHtml) {
+      // an opening tag has one child with no closing tag
+      if (root.children?.length !== 1) return false;
+      // gotta actually *be* an element
+      if (root.children[0].type !== "element") return false;
+      // isn't self-closing (this test may need work)
+      if (/<[^>]+\/>/.test(sourceHtml)) return false;
+      // and isn't a tag that doesn't need to close in HTML (which will probably break JSX, tbf)
+      if (htmlVoidElements.includes(root.children[0].tagName?.toLowerCase()))
+        return false;
+
+      // of course, also shouldn't have children, and shouldn't have a closing
+      return (
+        !root.children[0]?.children?.length &&
+        !root.children[0]?.data?.position?.closing
+      );
+    }
+
+    function captureToEnd(node, index, parent, hast) {
+      const tagName = hast.children[0].tagName;
+      const valueToMatch = `</${tagName}>`;
+
+      let endIndex = index + 1;
+      while (
+        endIndex < parent.children.length &&
+        parent.children[endIndex].value !== valueToMatch
+      ) {
+        if (parent.children[endIndex].type === "jsx")
+          visitor(parent.children[endIndex], endIndex, parent);
+        ++endIndex;
+      }
+
+      const end = parent.children.splice(endIndex, 1)[0];
+      if (!end) return null;
+
+      let replacement = {
+        type: "jsx-hast-embedded-mdast",
+        children: hast.children,
+        // this may be a bit too simplistic
+        block: node.position.end.line !== end.position.start.line,
+      };
+
+      replacement.children[0].children = parent.children.splice(
+        index + 1,
+        endIndex - index - 1,
+      );
+
+      return replacement;
+    }
+  }
+
+  // rewire stringify to work with the crazy crap we did above
+  // this will ALL need to be changed if we upgrade to 9.0.0+
+  function attachCompiler(compiler) {
+    const proto = compiler.prototype;
+    const opts = {
+      allowDangerousHtml: true,
+      allowDangerousCharacters: true,
+      closeSelfClosing: true,
+      entities: { useNamedReferences: true },
+    };
+
+    proto.visitors = Object.assign({}, proto.visitors, {
+      "jsx-hast": hast,
+      "jsx-hast-embedded-mdast": hastMdast,
+    });
+
+    function hast(node) {
+      // if nothing was parsed out, there's no point in trying to recreate it; just use what was there
+      if (!node.children) {
+        return (node.value || "").trim();
+      }
+
+      var newHtml = node.children.map((n) => hast2html(n, opts)).join("");
+      var hastCompHtml = unified()
+        .use(rehypeParse, {
+          emitParseErrors: true,
+          verbose: true,
+          fragment: true,
+        })
+        .parse(node.value || "")
+        .children.map((n) => hast2html(n, opts))
+        .join("");
+
+      // if logically unchanged, write the original: too easy to screw this up otherwise
+      if (newHtml === hastCompHtml) return (node.value || "").trim();
+
+      // this really only works for html right now, so escape stuff that would be interpreted as jsx
+      newHtml = newHtml.replace(/[{]/g, "&#123;");
+
+      return newHtml;
+    }
+
+    function hastMdast(node) {
+      let content = "";
+
+      if (node.block) {
+        content = this.block(node.children[0]).replace(/^\n*|\n*$/g, "");
+        if (content.length) content = "\n" + content + "\n";
+        content = "\n" + content + "\n";
+      } else {
+        content = this.all(node.children[0]).join("");
+      }
+
+      const endTag = `</${node.children[0].tagName}>`;
+      const mdastChildren = node.children[0].children;
+
+      node.children[0].children = [];
+      let container = hast2html(node.children[0], opts);
+      node.children[0].children = mdastChildren;
+
+      return container.replace(endTag, content + endTag);
+    }
+  }
+
+  function offsetPosition(node, offsetPoint)
+  {
+    visit(node, (child) => {
+      if (!child.position) return;
+      if (child.position.start?.line) child.position.start.line += offsetPoint.line - 1;
+      if (child.position.start?.column) child.position.start.column += offsetPoint.column - 1;
+      if (child.position.start?.offset) child.position.start.offset += offsetPoint.offset;
+      if (child.position.end?.line) child.position.end.line += offsetPoint.line - 1;
+      if (child.position.end?.column) child.position.end.column += offsetPoint.column - 1;
+      if (child.position.end?.offset) child.position.end.offset += offsetPoint.offset;
+    });
+  }
+}
diff --git a/tools/automation/actions/link-check/package-lock.json b/tools/automation/actions/link-check/package-lock.json
new file mode 100644
index 00000000000..fb516252e98
--- /dev/null
+++ b/tools/automation/actions/link-check/package-lock.json
@@ -0,0 +1,1978 @@
+{
+  "name": "docs-link-check",
+  "version": "1.0.0",
+  "lockfileVersion": 3,
+  "requires": true,
+  "packages": {
+    "": {
+      "name": "docs-link-check",
+      "version": "1.0.0",
+      "dependencies": {
+        "@actions/core": "^1.10.1",
+        "@actions/github": "^6.0.0",
+        "fast-glob": "^3.2.12",
+        "github-slugger": "^1.5.0",
+        "hast-util-to-html": "^7.1.3",
+        "html-void-elements": "^2.0.1",
+        "is-absolute-url": "^3.0.3",
+        "js-yaml": "^4.1.0",
+        "mdast-util-to-string": "^1.1.0",
+        "rehype-parse": "^7.0.1",
+        "rehype-stringify": "^8.0.0",
+        "remark-admonitions": "github:josh-heyer/remark-admonitions",
+        "remark-frontmatter": "^2.0.0",
+        "remark-mdx": "^1.6.22",
+        "remark-rehype": "^8.0.0",
+        "remark-stringify": "^8.1.1",
+        "to-vfile": "^6.1.0",
+        "unified": "^9.2.2",
+        "unist-util-visit": "^2.0.3",
+        "unist-util-visit-parents": "^5.1.3"
+      }
+    },
+    "node_modules/@actions/core": {
+      "version": "1.10.1",
+      "resolved":
"https://registry.npmjs.org/@actions/core/-/core-1.10.1.tgz", + "integrity": "sha512-3lBR9EDAY+iYIpTnTIXmWcNbX3T2kCkAEQGIQx4NVQ0575nk2k3GRZDTPQG+vVtS2izSLmINlxXf0uLtnrTP+g==", + "dependencies": { + "@actions/http-client": "^2.0.1", + "uuid": "^8.3.2" + } + }, + "node_modules/@actions/github": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/@actions/github/-/github-6.0.0.tgz", + "integrity": "sha512-alScpSVnYmjNEXboZjarjukQEzgCRmjMv6Xj47fsdnqGS73bjJNDpiiXmp8jr0UZLdUB6d9jW63IcmddUP+l0g==", + "dependencies": { + "@actions/http-client": "^2.2.0", + "@octokit/core": "^5.0.1", + "@octokit/plugin-paginate-rest": "^9.0.0", + "@octokit/plugin-rest-endpoint-methods": "^10.0.0" + } + }, + "node_modules/@actions/http-client": { + "version": "2.2.1", + "resolved": "https://registry.npmjs.org/@actions/http-client/-/http-client-2.2.1.tgz", + "integrity": "sha512-KhC/cZsq7f8I4LfZSJKgCvEwfkE8o1538VoBeoGzokVLLnbFDEAdFD3UhoMklxo2un9NJVBdANOresx7vTHlHw==", + "dependencies": { + "tunnel": "^0.0.6", + "undici": "^5.25.4" + } + }, + "node_modules/@babel/code-frame": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.24.7.tgz", + "integrity": "sha512-BcYH1CVJBO9tvyIZ2jVeXgSIMvGZ2FDRvDdOIVQyuklNKSsx+eppDEBq/g47Ayw+RqNFE+URvOShmf+f/qwAlA==", + "dependencies": { + "@babel/highlight": "^7.24.7", + "picocolors": "^1.0.0" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/core": { + "version": "7.12.9", + "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.12.9.tgz", + "integrity": "sha512-gTXYh3M5wb7FRXQy+FErKFAv90BnlOuNn1QkCK2lREoPAjrQCO49+HVSrFoe5uakFAF5eenS75KbO2vQiLrTMQ==", + "dependencies": { + "@babel/code-frame": "^7.10.4", + "@babel/generator": "^7.12.5", + "@babel/helper-module-transforms": "^7.12.1", + "@babel/helpers": "^7.12.5", + "@babel/parser": "^7.12.7", + "@babel/template": "^7.12.7", + "@babel/traverse": "^7.12.9", + "@babel/types": "^7.12.7", + "convert-source-map": "^1.7.0", + "debug": "^4.1.0", + "gensync": "^1.0.0-beta.1", + "json5": "^2.1.2", + "lodash": "^4.17.19", + "resolve": "^1.3.2", + "semver": "^5.4.1", + "source-map": "^0.5.0" + }, + "engines": { + "node": ">=6.9.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/babel" + } + }, + "node_modules/@babel/generator": { + "version": "7.24.10", + "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.24.10.tgz", + "integrity": "sha512-o9HBZL1G2129luEUlG1hB4N/nlYNWHnpwlND9eOMclRqqu1YDy2sSYVCFUZwl8I1Gxh+QSRrP2vD7EpUmFVXxg==", + "dependencies": { + "@babel/types": "^7.24.9", + "@jridgewell/gen-mapping": "^0.3.5", + "@jridgewell/trace-mapping": "^0.3.25", + "jsesc": "^2.5.1" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-environment-visitor": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.24.7.tgz", + "integrity": "sha512-DoiN84+4Gnd0ncbBOM9AZENV4a5ZiL39HYMyZJGZ/AZEykHYdJw0wW3kdcsh9/Kn+BRXHLkkklZ51ecPKmI1CQ==", + "dependencies": { + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-function-name": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.24.7.tgz", + "integrity": "sha512-FyoJTsj/PEUWu1/TYRiXTIHc8lbw+TDYkZuoE43opPS5TrI7MyONBE1oNvfguEXAD9yhQRrVBnXdXzSLQl9XnA==", + "dependencies": { + "@babel/template": "^7.24.7", + "@babel/types": "^7.24.7" + }, + 
"engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-hoist-variables": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-hoist-variables/-/helper-hoist-variables-7.24.7.tgz", + "integrity": "sha512-MJJwhkoGy5c4ehfoRyrJ/owKeMl19U54h27YYftT0o2teQ3FJ3nQUf/I3LlJsX4l3qlw7WRXUmiyajvHXoTubQ==", + "dependencies": { + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-module-imports": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.24.7.tgz", + "integrity": "sha512-8AyH3C+74cgCVVXow/myrynrAGv+nTVg5vKu2nZph9x7RcRwzmh0VFallJuFTZ9mx6u4eSdXZfcOzSqTUm0HCA==", + "dependencies": { + "@babel/traverse": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-module-transforms": { + "version": "7.24.9", + "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.24.9.tgz", + "integrity": "sha512-oYbh+rtFKj/HwBQkFlUzvcybzklmVdVV3UU+mN7n2t/q3yGHbuVdNxyFvSBO1tfvjyArpHNcWMAzsSPdyI46hw==", + "dependencies": { + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-module-imports": "^7.24.7", + "@babel/helper-simple-access": "^7.24.7", + "@babel/helper-split-export-declaration": "^7.24.7", + "@babel/helper-validator-identifier": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/helper-plugin-utils": { + "version": "7.10.4", + "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.4.tgz", + "integrity": "sha512-O4KCvQA6lLiMU9l2eawBPMf1xPP8xPfB3iEQw150hOVTqj/rfXz0ThTb4HEzqQfs2Bmo5Ay8BzxfzVtBrr9dVg==" + }, + "node_modules/@babel/helper-simple-access": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-simple-access/-/helper-simple-access-7.24.7.tgz", + "integrity": "sha512-zBAIvbCMh5Ts+b86r/CjU+4XGYIs+R1j951gxI3KmmxBMhCg4oQMsv6ZXQ64XOm/cvzfU1FmoCyt6+owc5QMYg==", + "dependencies": { + "@babel/traverse": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-split-export-declaration": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.24.7.tgz", + "integrity": "sha512-oy5V7pD+UvfkEATUKvIjvIAH/xCzfsFVw7ygW2SI6NClZzquT+mwdTfgfdbUiceh6iQO0CHtCPsyze/MZ2YbAA==", + "dependencies": { + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-string-parser": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.24.8.tgz", + "integrity": "sha512-pO9KhhRcuUyGnJWwyEgnRJTSIZHiT+vMD0kPeD+so0l7mxkMT19g3pjY9GTnHySck/hDzq+dtW/4VgnMkippsQ==", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-validator-identifier": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.24.7.tgz", + "integrity": "sha512-rR+PBcQ1SMQDDyF6X0wxtG8QyLCgUB0eRAGguqRLfkCA87l7yAP7ehq8SNj96OOGTO8OBV70KhuFYcIkHXOg0w==", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helpers": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.24.8.tgz", + "integrity": 
"sha512-gV2265Nkcz7weJJfvDoAEVzC1e2OTDpkGbEsebse8koXUJUXPsCMi7sRo/+SPMuMZ9MtUPnGwITTnQnU5YjyaQ==", + "dependencies": { + "@babel/template": "^7.24.7", + "@babel/types": "^7.24.8" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/highlight": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.24.7.tgz", + "integrity": "sha512-EStJpq4OuY8xYfhGVXngigBJRWxftKX9ksiGDnmlY3o7B/V7KIAc9X4oiK87uPJSc/vs5L869bem5fhZa8caZw==", + "dependencies": { + "@babel/helper-validator-identifier": "^7.24.7", + "chalk": "^2.4.2", + "js-tokens": "^4.0.0", + "picocolors": "^1.0.0" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/parser": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.24.8.tgz", + "integrity": "sha512-WzfbgXOkGzZiXXCqk43kKwZjzwx4oulxZi3nq2TYL9mOjQv6kYwul9mz6ID36njuL7Xkp6nJEfok848Zj10j/w==", + "bin": { + "parser": "bin/babel-parser.js" + }, + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@babel/plugin-proposal-object-rest-spread": { + "version": "7.12.1", + "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-object-rest-spread/-/plugin-proposal-object-rest-spread-7.12.1.tgz", + "integrity": "sha512-s6SowJIjzlhx8o7lsFx5zmY4At6CTtDvgNQDdPzkBQucle58A6b/TTeEBYtyDgmcXjUTM+vE8YOGHZzzbc/ioA==", + "deprecated": "This proposal has been merged to the ECMAScript standard and thus this plugin is no longer maintained. Please use @babel/plugin-transform-object-rest-spread instead.", + "dependencies": { + "@babel/helper-plugin-utils": "^7.10.4", + "@babel/plugin-syntax-object-rest-spread": "^7.8.0", + "@babel/plugin-transform-parameters": "^7.12.1" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-jsx": { + "version": "7.12.1", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.12.1.tgz", + "integrity": "sha512-1yRi7yAtB0ETgxdY9ti/p2TivUxJkTdhu/ZbF9MshVGqOx1TdB3b7xCXs49Fupgg50N45KcAsRP/ZqWjs9SRjg==", + "dependencies": { + "@babel/helper-plugin-utils": "^7.10.4" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-object-rest-spread": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz", + "integrity": "sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-parameters": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-parameters/-/plugin-transform-parameters-7.24.7.tgz", + "integrity": "sha512-yGWW5Rr+sQOhK0Ot8hjDJuxU3XLRQGflvT4lhlSY0DFvdb3TwKaY26CJzHtYllU0vT9j58hc37ndFPsqT1SrzA==", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-parameters/node_modules/@babel/helper-plugin-utils": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.24.8.tgz", + "integrity": "sha512-FFWx5142D8h2Mgr/iPVGH5G7w6jDn4jUSpZTyDnQO0Yn7Ks2Kuz6Pci8H6MPCoUJegd/UZQ3tAvfLCxQSnWWwg==", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/template": { + "version": "7.24.7", + 
"resolved": "https://registry.npmjs.org/@babel/template/-/template-7.24.7.tgz", + "integrity": "sha512-jYqfPrU9JTF0PmPy1tLYHW4Mp4KlgxJD9l2nP9fD6yT/ICi554DmrWBAEYpIelzjHf1msDP3PxJIRt/nFNfBig==", + "dependencies": { + "@babel/code-frame": "^7.24.7", + "@babel/parser": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/traverse": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.24.8.tgz", + "integrity": "sha512-t0P1xxAPzEDcEPmjprAQq19NWum4K0EQPjMwZQZbHt+GiZqvjCHjj755Weq1YRPVzBI+3zSfvScfpnuIecVFJQ==", + "dependencies": { + "@babel/code-frame": "^7.24.7", + "@babel/generator": "^7.24.8", + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-function-name": "^7.24.7", + "@babel/helper-hoist-variables": "^7.24.7", + "@babel/helper-split-export-declaration": "^7.24.7", + "@babel/parser": "^7.24.8", + "@babel/types": "^7.24.8", + "debug": "^4.3.1", + "globals": "^11.1.0" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/types": { + "version": "7.24.9", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.24.9.tgz", + "integrity": "sha512-xm8XrMKz0IlUdocVbYJe0Z9xEgidU7msskG8BbhnTPK/HZ2z/7FP7ykqPgrUH+C+r414mNfNWam1f2vqOjqjYQ==", + "dependencies": { + "@babel/helper-string-parser": "^7.24.8", + "@babel/helper-validator-identifier": "^7.24.7", + "to-fast-properties": "^2.0.0" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@fastify/busboy": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/@fastify/busboy/-/busboy-2.1.1.tgz", + "integrity": "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA==", + "engines": { + "node": ">=14" + } + }, + "node_modules/@jridgewell/gen-mapping": { + "version": "0.3.5", + "resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.5.tgz", + "integrity": "sha512-IzL8ZoEDIBRWEzlCcRhOaCupYyN5gdIK+Q6fbFdPDg6HqX6jpkItn7DFIpW9LQzXG6Df9sA7+OKnq0qlz/GaQg==", + "dependencies": { + "@jridgewell/set-array": "^1.2.1", + "@jridgewell/sourcemap-codec": "^1.4.10", + "@jridgewell/trace-mapping": "^0.3.24" + }, + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/resolve-uri": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz", + "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==", + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/set-array": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/@jridgewell/set-array/-/set-array-1.2.1.tgz", + "integrity": "sha512-R8gLRTZeyp03ymzP/6Lil/28tGeGEzhx1q2k703KGWRAI1VdvPIXdG70VJc2pAMw3NA6JKL5hhFu1sJX0Mnn/A==", + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/sourcemap-codec": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.0.tgz", + "integrity": "sha512-gv3ZRaISU3fjPAgNsriBRqGWQL6quFx04YMPW/zD8XMLsU32mhCCbfbO6KZFLjvYpCZ8zyDEgqsgf+PwPaM7GQ==" + }, + "node_modules/@jridgewell/trace-mapping": { + "version": "0.3.25", + "resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.25.tgz", + "integrity": "sha512-vNk6aEwybGtawWmy/PzwnGDOjCkLWSD2wqvjGGAgOAwCGWySYXfYoxt00IJkTF+8Lb57DwOb3Aa0o9CApepiYQ==", + "dependencies": { + "@jridgewell/resolve-uri": "^3.1.0", + "@jridgewell/sourcemap-codec": "^1.4.14" + } + }, + 
"node_modules/@mdx-js/util": { + "version": "1.6.22", + "resolved": "https://registry.npmjs.org/@mdx-js/util/-/util-1.6.22.tgz", + "integrity": "sha512-H1rQc1ZOHANWBvPcW+JpGwr+juXSxM8Q8YCkm3GhZd8REu1fHR3z99CErO1p9pkcfcxZnMdIZdIsXkOHY0NilA==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/@nodelib/fs.scandir": { + "version": "2.1.5", + "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz", + "integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==", + "dependencies": { + "@nodelib/fs.stat": "2.0.5", + "run-parallel": "^1.1.9" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/@nodelib/fs.stat": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz", + "integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==", + "engines": { + "node": ">= 8" + } + }, + "node_modules/@nodelib/fs.walk": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz", + "integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==", + "dependencies": { + "@nodelib/fs.scandir": "2.1.5", + "fastq": "^1.6.0" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/@octokit/auth-token": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/@octokit/auth-token/-/auth-token-4.0.0.tgz", + "integrity": "sha512-tY/msAuJo6ARbK6SPIxZrPBms3xPbfwBrulZe0Wtr/DIY9lje2HeV1uoebShn6mx7SjCHif6EjMvoREj+gZ+SA==", + "engines": { + "node": ">= 18" + } + }, + "node_modules/@octokit/core": { + "version": "5.1.0", + "resolved": "https://registry.npmjs.org/@octokit/core/-/core-5.1.0.tgz", + "integrity": "sha512-BDa2VAMLSh3otEiaMJ/3Y36GU4qf6GI+VivQ/P41NC6GHcdxpKlqV0ikSZ5gdQsmS3ojXeRx5vasgNTinF0Q4g==", + "dependencies": { + "@octokit/auth-token": "^4.0.0", + "@octokit/graphql": "^7.0.0", + "@octokit/request": "^8.0.2", + "@octokit/request-error": "^5.0.0", + "@octokit/types": "^12.0.0", + "before-after-hook": "^2.2.0", + "universal-user-agent": "^6.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@octokit/endpoint": { + "version": "9.0.4", + "resolved": "https://registry.npmjs.org/@octokit/endpoint/-/endpoint-9.0.4.tgz", + "integrity": "sha512-DWPLtr1Kz3tv8L0UvXTDP1fNwM0S+z6EJpRcvH66orY6Eld4XBMCSYsaWp4xIm61jTWxK68BrR7ibO+vSDnZqw==", + "dependencies": { + "@octokit/types": "^12.0.0", + "universal-user-agent": "^6.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@octokit/graphql": { + "version": "7.0.2", + "resolved": "https://registry.npmjs.org/@octokit/graphql/-/graphql-7.0.2.tgz", + "integrity": "sha512-OJ2iGMtj5Tg3s6RaXH22cJcxXRi7Y3EBqbHTBRq+PQAqfaS8f/236fUrWhfSn8P4jovyzqucxme7/vWSSZBX2Q==", + "dependencies": { + "@octokit/request": "^8.0.1", + "@octokit/types": "^12.0.0", + "universal-user-agent": "^6.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@octokit/openapi-types": { + "version": "20.0.0", + "resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-20.0.0.tgz", + "integrity": "sha512-EtqRBEjp1dL/15V7WiX5LJMIxxkdiGJnabzYx5Apx4FkQIFgAfKumXeYAqqJCj1s+BMX4cPFIFC4OLCR6stlnA==" + }, + "node_modules/@octokit/plugin-paginate-rest": { + "version": "9.2.1", + "resolved": "https://registry.npmjs.org/@octokit/plugin-paginate-rest/-/plugin-paginate-rest-9.2.1.tgz", + "integrity": 
"sha512-wfGhE/TAkXZRLjksFXuDZdmGnJQHvtU/joFQdweXUgzo1XwvBCD4o4+75NtFfjfLK5IwLf9vHTfSiU3sLRYpRw==", + "dependencies": { + "@octokit/types": "^12.6.0" + }, + "engines": { + "node": ">= 18" + }, + "peerDependencies": { + "@octokit/core": "5" + } + }, + "node_modules/@octokit/plugin-rest-endpoint-methods": { + "version": "10.4.1", + "resolved": "https://registry.npmjs.org/@octokit/plugin-rest-endpoint-methods/-/plugin-rest-endpoint-methods-10.4.1.tgz", + "integrity": "sha512-xV1b+ceKV9KytQe3zCVqjg+8GTGfDYwaT1ATU5isiUyVtlVAO3HNdzpS4sr4GBx4hxQ46s7ITtZrAsxG22+rVg==", + "dependencies": { + "@octokit/types": "^12.6.0" + }, + "engines": { + "node": ">= 18" + }, + "peerDependencies": { + "@octokit/core": "5" + } + }, + "node_modules/@octokit/request": { + "version": "8.2.0", + "resolved": "https://registry.npmjs.org/@octokit/request/-/request-8.2.0.tgz", + "integrity": "sha512-exPif6x5uwLqv1N1irkLG1zZNJkOtj8bZxuVHd71U5Ftuxf2wGNvAJyNBcPbPC+EBzwYEbBDdSFb8EPcjpYxPQ==", + "dependencies": { + "@octokit/endpoint": "^9.0.0", + "@octokit/request-error": "^5.0.0", + "@octokit/types": "^12.0.0", + "universal-user-agent": "^6.0.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@octokit/request-error": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/@octokit/request-error/-/request-error-5.0.1.tgz", + "integrity": "sha512-X7pnyTMV7MgtGmiXBwmO6M5kIPrntOXdyKZLigNfQWSEQzVxR4a4vo49vJjTWX70mPndj8KhfT4Dx+2Ng3vnBQ==", + "dependencies": { + "@octokit/types": "^12.0.0", + "deprecation": "^2.0.0", + "once": "^1.4.0" + }, + "engines": { + "node": ">= 18" + } + }, + "node_modules/@octokit/types": { + "version": "12.6.0", + "resolved": "https://registry.npmjs.org/@octokit/types/-/types-12.6.0.tgz", + "integrity": "sha512-1rhSOfRa6H9w4YwK0yrf5faDaDTb+yLyBUKOCV4xtCDB5VmIPqd/v9yr9o6SAzOAlRxMiRiCic6JVM1/kunVkw==", + "dependencies": { + "@octokit/openapi-types": "^20.0.0" + } + }, + "node_modules/@types/hast": { + "version": "2.3.10", + "resolved": "https://registry.npmjs.org/@types/hast/-/hast-2.3.10.tgz", + "integrity": "sha512-McWspRw8xx8J9HurkVBfYj0xKoE25tOFlHGdx4MJ5xORQrMGZNqJhVQWaIbm6Oyla5kYOXtDiopzKRJzEOkwJw==", + "dependencies": { + "@types/unist": "^2" + } + }, + "node_modules/@types/hast/node_modules/@types/unist": { + "version": "2.0.10", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.10.tgz", + "integrity": "sha512-IfYcSBWE3hLpBg8+X2SEa8LVkJdJEkT2Ese2aaLs3ptGdVtABxndrMaxuFlQ1qdFf9Q5rDvDpxI3WwgvKFAsQA==" + }, + "node_modules/@types/parse5": { + "version": "5.0.3", + "resolved": "https://registry.npmjs.org/@types/parse5/-/parse5-5.0.3.tgz", + "integrity": "sha512-kUNnecmtkunAoQ3CnjmMkzNU/gtxG8guhi+Fk2U/kOpIKjIMKnXGp4IJCgQJrXSgMsWYimYG4TGjz/UzbGEBTw==" + }, + "node_modules/ansi-styles": { + "version": "3.2.1", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz", + "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", + "dependencies": { + "color-convert": "^1.9.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/argparse": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz", + "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==" + }, + "node_modules/bail": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/bail/-/bail-1.0.5.tgz", + "integrity": "sha512-xFbRxM1tahm08yHBP16MMjVUAvDaBMD38zsM9EMAUN61omwLmKlOpB/Zku5QkjZ8TZ4vn53pj+t518cH0S03RQ==", + "funding": { 
+ "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/before-after-hook": { + "version": "2.2.3", + "resolved": "https://registry.npmjs.org/before-after-hook/-/before-after-hook-2.2.3.tgz", + "integrity": "sha512-NzUnlZexiaH/46WDhANlyR2bXRopNg4F/zuSA3OpZnllCUgRaOF2znDioDWrmbNVsuZk6l9pMquQB38cfBZwkQ==" + }, + "node_modules/braces": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz", + "integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==", + "dependencies": { + "fill-range": "^7.1.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/ccount": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/ccount/-/ccount-1.1.0.tgz", + "integrity": "sha512-vlNK021QdI7PNeiUh/lKkC/mNHHfV0m/Ad5JoI0TYtlBnJAslM/JIkm/tGC88bkLIwO6OQ5uV6ztS6kVAtCDlg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dependencies": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/chalk/node_modules/escape-string-regexp": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", + "integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==", + "engines": { + "node": ">=0.8.0" + } + }, + "node_modules/character-entities-html4": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/character-entities-html4/-/character-entities-html4-1.1.4.tgz", + "integrity": "sha512-HRcDxZuZqMx3/a+qrzxdBKBPUpxWEq9xw2OPZ3a/174ihfrQKVsFhqtthBInFy1zZ9GgZyFXOatNujm8M+El3g==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/character-entities-legacy": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/character-entities-legacy/-/character-entities-legacy-1.1.4.tgz", + "integrity": "sha512-3Xnr+7ZFS1uxeiUDvV02wQ+QDbc55o97tIV5zHScSPJpcLm/r0DFPcoY3tYRp+VZukxuMeKgXYmsXQHO05zQeA==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/character-reference-invalid": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/character-reference-invalid/-/character-reference-invalid-1.1.4.tgz", + "integrity": "sha512-mKKUkUbhPpQlCOfIuZkvSEgktjPFIsZKRRbC6KWVEMvlzblj3i3asQv5ODsrwt0N3pHAEvjP8KTQPHkp0+6jOg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/collapse-white-space": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/collapse-white-space/-/collapse-white-space-1.0.6.tgz", + "integrity": "sha512-jEovNnrhMuqyCcjfEJA56v0Xq8SkIoPKDyaHahwo3POf4qcSXqMYuwNcOTzp74vTsR9Tn08z4MxWqAhcekogkQ==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/color-convert": { + "version": "1.9.3", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz", + "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", + "dependencies": { + "color-name": "1.1.3" + } + }, + 
"node_modules/color-name": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz", + "integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==" + }, + "node_modules/comma-separated-tokens": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/comma-separated-tokens/-/comma-separated-tokens-1.0.8.tgz", + "integrity": "sha512-GHuDRO12Sypu2cV70d1dkA2EUmXHgntrzbpvOB+Qy+49ypNfGgFQIC2fhhXbnyrJRynDCAARsT7Ou0M6hirpfw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/convert-source-map": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-1.9.0.tgz", + "integrity": "sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==" + }, + "node_modules/debug": { + "version": "4.3.4", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.4.tgz", + "integrity": "sha512-PRWFHuSU3eDtQJPvnNY7Jcket1j0t5OuOsFzPPzsekD52Zl8qUfFIPEiswXqIvHWGVHOgX+7G/vCNNhehwxfkQ==", + "dependencies": { + "ms": "2.1.2" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/deprecation": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/deprecation/-/deprecation-2.3.1.tgz", + "integrity": "sha512-xmHIy4F3scKVwMsQ4WnVaS8bHOx0DmVwRywosKhaILI0ywMDWPtBSku2HNxRvF7jtwDRsoEwYQSfbxj8b7RlJQ==" + }, + "node_modules/extend": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/extend/-/extend-3.0.2.tgz", + "integrity": "sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==" + }, + "node_modules/fast-glob": { + "version": "3.3.2", + "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.3.2.tgz", + "integrity": "sha512-oX2ruAFQwf/Orj8m737Y5adxDQO0LAB7/S5MnxCdTNDd4p6BsyIVsv9JQsATbTSq8KHRpLwIHbVlUNatxd+1Ow==", + "dependencies": { + "@nodelib/fs.stat": "^2.0.2", + "@nodelib/fs.walk": "^1.2.3", + "glob-parent": "^5.1.2", + "merge2": "^1.3.0", + "micromatch": "^4.0.4" + }, + "engines": { + "node": ">=8.6.0" + } + }, + "node_modules/fastq": { + "version": "1.17.1", + "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.17.1.tgz", + "integrity": "sha512-sRVD3lWVIXWg6By68ZN7vho9a1pQcN/WBFaAAsDDFzlJjvoGx0P8z7V1t72grFJfJhu3YPZBuu25f7Kaw2jN1w==", + "dependencies": { + "reusify": "^1.0.4" + } + }, + "node_modules/fault": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/fault/-/fault-1.0.4.tgz", + "integrity": "sha512-CJ0HCB5tL5fYTEA7ToAq5+kTwd++Borf1/bifxd9iT70QcXr4MRrO3Llf8Ifs70q+SJcGHFtnIE/Nw6giCtECA==", + "dependencies": { + "format": "^0.2.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/fill-range": { + "version": "7.1.1", + "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz", + "integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==", + "dependencies": { + "to-regex-range": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/format": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/format/-/format-0.2.2.tgz", + "integrity": "sha512-wzsgA6WOq+09wrU1tsJ09udeR/YZRaeArL9e1wPbFg3GG2yDnC2ldKpxs4xunpFF9DgqCqOIra3bc1HWrJ37Ww==", + "engines": { + "node": ">=0.4.x" + } + }, + "node_modules/function-bind": { + "version": "1.1.2", + "resolved": 
"https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/gensync": { + "version": "1.0.0-beta.2", + "resolved": "https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.2.tgz", + "integrity": "sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/github-slugger": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/github-slugger/-/github-slugger-1.5.0.tgz", + "integrity": "sha512-wIh+gKBI9Nshz2o46B0B3f5k/W+WI9ZAv6y5Dn5WJ5SK1t0TnDimB4WE5rmTD05ZAIn8HALCZVmCsvj0w0v0lw==" + }, + "node_modules/glob-parent": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", + "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", + "dependencies": { + "is-glob": "^4.0.1" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/globals": { + "version": "11.12.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-11.12.0.tgz", + "integrity": "sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==", + "engines": { + "node": ">=4" + } + }, + "node_modules/has-flag": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", + "integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==", + "engines": { + "node": ">=4" + } + }, + "node_modules/hasown": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", + "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/hast-util-from-parse5": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/hast-util-from-parse5/-/hast-util-from-parse5-6.0.1.tgz", + "integrity": "sha512-jeJUWiN5pSxW12Rh01smtVkZgZr33wBokLzKLwinYOUfSzm1Nl/c3GUGebDyOKjdsRgMvoVbV0VpAcpjF4NrJA==", + "dependencies": { + "@types/parse5": "^5.0.0", + "hastscript": "^6.0.0", + "property-information": "^5.0.0", + "vfile": "^4.0.0", + "vfile-location": "^3.2.0", + "web-namespaces": "^1.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-is-element": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/hast-util-is-element/-/hast-util-is-element-1.1.0.tgz", + "integrity": "sha512-oUmNua0bFbdrD/ELDSSEadRVtWZOf3iF6Lbv81naqsIV99RnSCieTbWuWCY8BAeEfKJTKl0gRdokv+dELutHGQ==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-parse-selector": { + "version": "2.2.5", + "resolved": "https://registry.npmjs.org/hast-util-parse-selector/-/hast-util-parse-selector-2.2.5.tgz", + "integrity": "sha512-7j6mrk/qqkSehsM92wQjdIgWM2/BW61u/53G6xmC8i1OmEdKLHbk419QKQUjz6LglWsfqoiHmyMRkP1BGjecNQ==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-to-html": { + "version": "7.1.3", + "resolved": "https://registry.npmjs.org/hast-util-to-html/-/hast-util-to-html-7.1.3.tgz", + "integrity": 
"sha512-yk2+1p3EJTEE9ZEUkgHsUSVhIpCsL/bvT8E5GzmWc+N1Po5gBw+0F8bo7dpxXR0nu0bQVxVZGX2lBGF21CmeDw==", + "dependencies": { + "ccount": "^1.0.0", + "comma-separated-tokens": "^1.0.0", + "hast-util-is-element": "^1.0.0", + "hast-util-whitespace": "^1.0.0", + "html-void-elements": "^1.0.0", + "property-information": "^5.0.0", + "space-separated-tokens": "^1.0.0", + "stringify-entities": "^3.0.1", + "unist-util-is": "^4.0.0", + "xtend": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-to-html/node_modules/html-void-elements": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/html-void-elements/-/html-void-elements-1.0.5.tgz", + "integrity": "sha512-uE/TxKuyNIcx44cIWnjr/rfIATDH7ZaOMmstu0CwhFG1Dunhlp4OC6/NMbhiwoq5BpW0ubi303qnEk/PZj614w==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/hast-util-to-html/node_modules/unist-util-is": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.1.0.tgz", + "integrity": "sha512-ZOQSsnce92GrxSqlnEEseX0gi7GH9zTJZ0p9dtu87WRb/37mMPO2Ilx1s/t9vBHrFhbgweUwb+t7cIn5dxPhZg==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-whitespace": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/hast-util-whitespace/-/hast-util-whitespace-1.0.4.tgz", + "integrity": "sha512-I5GTdSfhYfAPNztx2xJRQpG8cuDSNt599/7YUn7Gx/WxNMsG+a835k97TDkFgk123cwjfwINaZknkKkphx/f2A==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hastscript": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/hastscript/-/hastscript-6.0.0.tgz", + "integrity": "sha512-nDM6bvd7lIqDUiYEiu5Sl/+6ReP0BMk/2f4U/Rooccxkj0P5nm+acM5PrGJ/t5I8qPGiqZSE6hVAwZEdZIvP4w==", + "dependencies": { + "@types/hast": "^2.0.0", + "comma-separated-tokens": "^1.0.0", + "hast-util-parse-selector": "^2.0.0", + "property-information": "^5.0.0", + "space-separated-tokens": "^1.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/html-void-elements": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/html-void-elements/-/html-void-elements-2.0.1.tgz", + "integrity": "sha512-0quDb7s97CfemeJAnW9wC0hw78MtW7NU3hqtCD75g2vFlDLt36llsYD7uB7SUzojLMP24N5IatXf7ylGXiGG9A==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/inherits": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", + "integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==" + }, + "node_modules/is-absolute-url": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/is-absolute-url/-/is-absolute-url-3.0.3.tgz", + "integrity": "sha512-opmNIX7uFnS96NtPmhWQgQx6/NYFgsUXYMllcfzwWKUMwfo8kku1TvE6hkNcH+Q1ts5cMVrsY7j0bxXQDciu9Q==", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-alphabetical": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-alphabetical/-/is-alphabetical-1.0.4.tgz", + "integrity": "sha512-DwzsA04LQ10FHTZuL0/grVDk4rFoVH1pjAToYwBrHSxcrBIGQuXrQMtD5U1b0U2XVgKZCTLLP8u2Qxqhy3l2Vg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-alphanumeric": { + "version": "1.0.0", + 
"resolved": "https://registry.npmjs.org/is-alphanumeric/-/is-alphanumeric-1.0.0.tgz", + "integrity": "sha512-ZmRL7++ZkcMOfDuWZuMJyIVLr2keE1o/DeNWh1EmgqGhUcV+9BIVsx0BcSBOHTZqzjs4+dISzr2KAeBEWGgXeA==", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-alphanumerical": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-alphanumerical/-/is-alphanumerical-1.0.4.tgz", + "integrity": "sha512-UzoZUr+XfVz3t3v4KyGEniVL9BDRoQtY7tOyrRybkVNjDFWyo1yhXNGrrBTQxp3ib9BLAWs7k2YKBQsFRkZG9A==", + "dependencies": { + "is-alphabetical": "^1.0.0", + "is-decimal": "^1.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-buffer": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.5.tgz", + "integrity": "sha512-i2R6zNFDwgEHJyQUtJEk0XFi1i0dPFn/oqjK3/vPCcDeJvW5NQ83V8QbicfF1SupOaB0h8ntgBC2YiE7dfyctQ==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "engines": { + "node": ">=4" + } + }, + "node_modules/is-core-module": { + "version": "2.14.0", + "resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.14.0.tgz", + "integrity": "sha512-a5dFJih5ZLYlRtDc0dZWP7RiKr6xIKzmn/oAYCDvdLThadVgyJwlaoQPmRtMSpz+rk0OGAgIu+TcM9HUF0fk1A==", + "dependencies": { + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-decimal": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-decimal/-/is-decimal-1.0.4.tgz", + "integrity": "sha512-RGdriMmQQvZ2aqaQq3awNA6dCGtKpiDFcOzrTWrDAT2MiWrKQVPmxLGHl7Y2nNu6led0kEyoX0enY0qXYsv9zw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-extglob": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", + "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-glob": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz", + "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==", + "dependencies": { + "is-extglob": "^2.1.1" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-hexadecimal": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-hexadecimal/-/is-hexadecimal-1.0.4.tgz", + "integrity": "sha512-gyPJuv83bHMpocVYoqof5VDiZveEoGoFL8m3BXNb2VW8Xs+rz9kqO8LOQ5DH6EsuvilT1ApazU0pyl+ytbPtlw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-number": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz", + "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==", + "engines": { + "node": ">=0.12.0" + } + }, + "node_modules/is-plain-obj": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-2.1.0.tgz", + "integrity": "sha512-YWnfyRwxL/+SsrWYfOpUtz5b3YD+nyfkHvjbcanzk8zgyO4ASD67uVMRt8k5bM4lLMDnXfriRhOpemw+NfT1eA==", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-whitespace-character": { + "version": 
"1.0.4", + "resolved": "https://registry.npmjs.org/is-whitespace-character/-/is-whitespace-character-1.0.4.tgz", + "integrity": "sha512-SDweEzfIZM0SJV0EUga669UTKlmL0Pq8Lno0QDQsPnvECB3IM2aP0gdx5TrU0A01MAPfViaZiI2V1QMZLaKK5w==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-word-character": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-word-character/-/is-word-character-1.0.4.tgz", + "integrity": "sha512-5SMO8RVennx3nZrqtKwCGyyetPE9VDba5ugvKLaD4KopPG5kR4mQ7tNt/r7feL5yt5h3lpuBbIUmCOG2eSzXHA==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/js-tokens": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz", + "integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==" + }, + "node_modules/js-yaml": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.0.tgz", + "integrity": "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA==", + "dependencies": { + "argparse": "^2.0.1" + }, + "bin": { + "js-yaml": "bin/js-yaml.js" + } + }, + "node_modules/jsesc": { + "version": "2.5.2", + "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-2.5.2.tgz", + "integrity": "sha512-OYu7XEzjkCQ3C5Ps3QIZsQfNpqoJyZZA99wd9aWd05NCtC5pWOkShK2mkL6HXQR6/Cy2lbNdPlZBpuQHXE63gA==", + "bin": { + "jsesc": "bin/jsesc" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/json5": { + "version": "2.2.3", + "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz", + "integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==", + "bin": { + "json5": "lib/cli.js" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/lodash": { + "version": "4.17.21", + "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz", + "integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==" + }, + "node_modules/markdown-escapes": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/markdown-escapes/-/markdown-escapes-1.0.4.tgz", + "integrity": "sha512-8z4efJYk43E0upd0NbVXwgSTQs6cT3T06etieCMEg7dRbzCbxUCK/GHlX8mhHRDcp+OLlHkPKsvqQTCvsRl2cg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/mdast-util-compact": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/mdast-util-compact/-/mdast-util-compact-2.0.1.tgz", + "integrity": "sha512-7GlnT24gEwDrdAwEHrU4Vv5lLWrEer4KOkAiKT9nYstsTad7Oc1TwqT2zIMKRdZF7cTuaf+GA1E4Kv7jJh8mPA==", + "dependencies": { + "unist-util-visit": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-definitions": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/mdast-util-definitions/-/mdast-util-definitions-4.0.0.tgz", + "integrity": "sha512-k8AJ6aNnUkB7IE+5azR9h81O5EQ/cTDXtWdMq9Kk5KcEW/8ritU5CeLg/9HhOC++nALHBlaogJ5jz0Ybk3kPMQ==", + "dependencies": { + "unist-util-visit": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-to-hast": { + "version": "10.2.0", + "resolved": "https://registry.npmjs.org/mdast-util-to-hast/-/mdast-util-to-hast-10.2.0.tgz", + "integrity": 
"sha512-JoPBfJ3gBnHZ18icCwHR50orC9kNH81tiR1gs01D8Q5YpV6adHNO9nKNuFBCJQ941/32PT1a63UF/DitmS3amQ==", + "dependencies": { + "@types/mdast": "^3.0.0", + "@types/unist": "^2.0.0", + "mdast-util-definitions": "^4.0.0", + "mdurl": "^1.0.0", + "unist-builder": "^2.0.0", + "unist-util-generated": "^1.0.0", + "unist-util-position": "^3.0.0", + "unist-util-visit": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-to-hast/node_modules/@types/mdast": { + "version": "3.0.15", + "resolved": "https://registry.npmjs.org/@types/mdast/-/mdast-3.0.15.tgz", + "integrity": "sha512-LnwD+mUEfxWMa1QpDraczIn6k0Ee3SMicuYSSzS6ZYl2gKS09EClnJYGd8Du6rfc5r/GZEk5o1mRb8TaTj03sQ==", + "dependencies": { + "@types/unist": "^2" + } + }, + "node_modules/mdast-util-to-hast/node_modules/@types/unist": { + "version": "2.0.10", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.10.tgz", + "integrity": "sha512-IfYcSBWE3hLpBg8+X2SEa8LVkJdJEkT2Ese2aaLs3ptGdVtABxndrMaxuFlQ1qdFf9Q5rDvDpxI3WwgvKFAsQA==" + }, + "node_modules/mdast-util-to-string": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/mdast-util-to-string/-/mdast-util-to-string-1.1.0.tgz", + "integrity": "sha512-jVU0Nr2B9X3MU4tSK7JP1CMkSvOj7X5l/GboG1tKRw52lLF1x2Ju92Ms9tNetCcbfX3hzlM73zYo2NKkWSfF/A==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdurl": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/mdurl/-/mdurl-1.0.1.tgz", + "integrity": "sha512-/sKlQJCBYVY9Ers9hqzKou4H6V5UWc/M59TH2dvkt+84itfnq7uFOMLpOiOS4ujvHP4etln18fmIxA5R5fll0g==" + }, + "node_modules/merge2": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz", + "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==", + "engines": { + "node": ">= 8" + } + }, + "node_modules/micromatch": { + "version": "4.0.7", + "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.7.tgz", + "integrity": "sha512-LPP/3KorzCwBxfeUuZmaR6bG2kdeHSbe0P2tY3FLRU4vYrjYz5hI4QZwV0njUx3jeuKe67YukQ1LSPZBKDqO/Q==", + "dependencies": { + "braces": "^3.0.3", + "picomatch": "^2.3.1" + }, + "engines": { + "node": ">=8.6" + } + }, + "node_modules/ms": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", + "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==" + }, + "node_modules/once": { + "version": "1.4.0", + "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", + "integrity": "sha512-lNaJgI+2Q5URQBkccEKHTQOPaXdUxnZZElQTZY0MFUAuaEqe1E+Nyvgdz/aIyNi6Z9MzO5dv1H8n58/GELp3+w==", + "dependencies": { + "wrappy": "1" + } + }, + "node_modules/parse-entities": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/parse-entities/-/parse-entities-2.0.0.tgz", + "integrity": "sha512-kkywGpCcRYhqQIchaWqZ875wzpS/bMKhz5HnN3p7wveJTkTtyAB/AlnS0f8DFSqYW1T82t6yEAkEcB+A1I3MbQ==", + "dependencies": { + "character-entities": "^1.0.0", + "character-entities-legacy": "^1.0.0", + "character-reference-invalid": "^1.0.0", + "is-alphanumerical": "^1.0.0", + "is-decimal": "^1.0.0", + "is-hexadecimal": "^1.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/parse-entities/node_modules/character-entities": { + "version": "1.2.4", + "resolved": 
"https://registry.npmjs.org/character-entities/-/character-entities-1.2.4.tgz", + "integrity": "sha512-iBMyeEHxfVnIakwOuDXpVkc54HijNgCyQB2w0VfGQThle6NXn50zU6V/u+LDhxHcDUPojn6Kpga3PTAD8W1bQw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/parse5": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/parse5/-/parse5-6.0.1.tgz", + "integrity": "sha512-Ofn/CTFzRGTTxwpNEs9PP93gXShHcTq255nzRYSKe8AkVpZY7e1fpmTfOyoIvjP5HG7Z2ZM7VS9PPhQGW2pOpw==" + }, + "node_modules/path-parse": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz", + "integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==" + }, + "node_modules/picocolors": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.0.1.tgz", + "integrity": "sha512-anP1Z8qwhkbmu7MFP5iTt+wQKXgwzf7zTyGlcdzabySa9vd0Xt392U0rVmz9poOaBj0uHJKyyo9/upk0HrEQew==" + }, + "node_modules/picomatch": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz", + "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==", + "engines": { + "node": ">=8.6" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/property-information": { + "version": "5.6.0", + "resolved": "https://registry.npmjs.org/property-information/-/property-information-5.6.0.tgz", + "integrity": "sha512-YUHSPk+A30YPv+0Qf8i9Mbfe/C0hdPXk1s1jPVToV8pk8BQtpw10ct89Eo7OWkutrwqvT0eicAxlOg3dOAu8JA==", + "dependencies": { + "xtend": "^4.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/queue-microtask": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz", + "integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ] + }, + "node_modules/rehype-parse": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/rehype-parse/-/rehype-parse-7.0.1.tgz", + "integrity": "sha512-fOiR9a9xH+Le19i4fGzIEowAbwG7idy2Jzs4mOrFWBSJ0sNUgy0ev871dwWnbOo371SjgjG4pwzrbgSVrKxecw==", + "dependencies": { + "hast-util-from-parse5": "^6.0.0", + "parse5": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/rehype-stringify": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/rehype-stringify/-/rehype-stringify-8.0.0.tgz", + "integrity": "sha512-VkIs18G0pj2xklyllrPSvdShAV36Ff3yE5PUO9u36f6+2qJFnn22Z5gKwBOwgXviux4UC7K+/j13AnZfPICi/g==", + "dependencies": { + "hast-util-to-html": "^7.1.1" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-admonitions": { + "version": "1.4.2", + "resolved": "git+ssh://git@github.com/josh-heyer/remark-admonitions.git#c57f82f3c5f21eeaf3c045ca111d294f13caab71", + "dependencies": { + "rehype-parse": "^6.0.2 || ^7.0.1", + "unified": "^9.2.2", + "unist-util-visit": "^2.0.3" + } + }, + "node_modules/remark-frontmatter": { + "version": "2.0.0", + "resolved": 
"https://registry.npmjs.org/remark-frontmatter/-/remark-frontmatter-2.0.0.tgz", + "integrity": "sha512-uNOQt4tO14qBFWXenF0MLC4cqo3dv8qiHPGyjCl1rwOT0LomSHpcElbjjVh5CwzElInB38HD8aSRVugKQjeyHA==", + "dependencies": { + "fault": "^1.0.1" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-mdx": { + "version": "1.6.22", + "resolved": "https://registry.npmjs.org/remark-mdx/-/remark-mdx-1.6.22.tgz", + "integrity": "sha512-phMHBJgeV76uyFkH4rvzCftLfKCr2RZuF+/gmVcaKrpsihyzmhXjA0BEMDaPTXG5y8qZOKPVo83NAOX01LPnOQ==", + "dependencies": { + "@babel/core": "7.12.9", + "@babel/helper-plugin-utils": "7.10.4", + "@babel/plugin-proposal-object-rest-spread": "7.12.1", + "@babel/plugin-syntax-jsx": "7.12.1", + "@mdx-js/util": "1.6.22", + "is-alphabetical": "1.0.4", + "remark-parse": "8.0.3", + "unified": "9.2.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-mdx/node_modules/unified": { + "version": "9.2.0", + "resolved": "https://registry.npmjs.org/unified/-/unified-9.2.0.tgz", + "integrity": "sha512-vx2Z0vY+a3YoTj8+pttM3tiJHCwY5UFbYdiWrwBEbHmK8pvsPj2rtAX2BFfgXen8T39CJWblWRDT4L5WGXtDdg==", + "dependencies": { + "bail": "^1.0.0", + "extend": "^3.0.0", + "is-buffer": "^2.0.0", + "is-plain-obj": "^2.0.0", + "trough": "^1.0.0", + "vfile": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-parse": { + "version": "8.0.3", + "resolved": "https://registry.npmjs.org/remark-parse/-/remark-parse-8.0.3.tgz", + "integrity": "sha512-E1K9+QLGgggHxCQtLt++uXltxEprmWzNfg+MxpfHsZlrddKzZ/hZyWHDbK3/Ap8HJQqYJRXP+jHczdL6q6i85Q==", + "dependencies": { + "ccount": "^1.0.0", + "collapse-white-space": "^1.0.2", + "is-alphabetical": "^1.0.0", + "is-decimal": "^1.0.0", + "is-whitespace-character": "^1.0.0", + "is-word-character": "^1.0.0", + "markdown-escapes": "^1.0.0", + "parse-entities": "^2.0.0", + "repeat-string": "^1.5.4", + "state-toggle": "^1.0.0", + "trim": "0.0.1", + "trim-trailing-lines": "^1.0.0", + "unherit": "^1.0.4", + "unist-util-remove-position": "^2.0.0", + "vfile-location": "^3.0.0", + "xtend": "^4.0.1" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-rehype": { + "version": "8.1.0", + "resolved": "https://registry.npmjs.org/remark-rehype/-/remark-rehype-8.1.0.tgz", + "integrity": "sha512-EbCu9kHgAxKmW1yEYjx3QafMyGY3q8noUbNUI5xyKbaFP89wbhDrKxyIQNukNYthzjNHZu6J7hwFg7hRm1svYA==", + "dependencies": { + "mdast-util-to-hast": "^10.2.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-stringify": { + "version": "8.1.1", + "resolved": "https://registry.npmjs.org/remark-stringify/-/remark-stringify-8.1.1.tgz", + "integrity": "sha512-q4EyPZT3PcA3Eq7vPpT6bIdokXzFGp9i85igjmhRyXWmPs0Y6/d2FYwUNotKAWyLch7g0ASZJn/KHHcHZQ163A==", + "dependencies": { + "ccount": "^1.0.0", + "is-alphanumeric": "^1.0.0", + "is-decimal": "^1.0.0", + "is-whitespace-character": "^1.0.0", + "longest-streak": "^2.0.1", + "markdown-escapes": "^1.0.0", + "markdown-table": "^2.0.0", + "mdast-util-compact": "^2.0.0", + "parse-entities": "^2.0.0", + "repeat-string": "^1.5.4", + "state-toggle": "^1.0.0", + "stringify-entities": "^3.0.0", + "unherit": "^1.0.4", + "xtend": "^4.0.1" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + 
"node_modules/remark-stringify/node_modules/longest-streak": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/longest-streak/-/longest-streak-2.0.4.tgz", + "integrity": "sha512-vM6rUVCVUJJt33bnmHiZEvr7wPT78ztX7rojL+LW51bHtLh6HTjx84LA5W4+oa6aKEJA7jJu5LR6vQRBpA5DVg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/remark-stringify/node_modules/markdown-table": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/markdown-table/-/markdown-table-2.0.0.tgz", + "integrity": "sha512-Ezda85ToJUBhM6WGaG6veasyym+Tbs3cMAw/ZhOPqXiYsr0jgocBV3j3nx+4lk47plLlIqjwuTm/ywVI+zjJ/A==", + "dependencies": { + "repeat-string": "^1.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/repeat-string": { + "version": "1.6.1", + "resolved": "https://registry.npmjs.org/repeat-string/-/repeat-string-1.6.1.tgz", + "integrity": "sha512-PV0dzCYDNfRi1jCDbJzpW7jNNDRuCOG/jI5ctQcGKt/clZD+YcPS3yIlWuTJMmESC8aevCFmWJy5wjAFgNqN6w==", + "engines": { + "node": ">=0.10" + } + }, + "node_modules/resolve": { + "version": "1.22.8", + "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.22.8.tgz", + "integrity": "sha512-oKWePCxqpd6FlLvGV1VU0x7bkPmmCNolxzjMf4NczoDnQcIWrAF+cPtZn5i6n+RfD2d9i0tzpKnG6Yk168yIyw==", + "dependencies": { + "is-core-module": "^2.13.0", + "path-parse": "^1.0.7", + "supports-preserve-symlinks-flag": "^1.0.0" + }, + "bin": { + "resolve": "bin/resolve" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/reusify": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz", + "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==", + "engines": { + "iojs": ">=1.0.0", + "node": ">=0.10.0" + } + }, + "node_modules/run-parallel": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz", + "integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "dependencies": { + "queue-microtask": "^1.2.2" + } + }, + "node_modules/semver": { + "version": "5.7.2", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.2.tgz", + "integrity": "sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==", + "bin": { + "semver": "bin/semver" + } + }, + "node_modules/source-map": { + "version": "0.5.7", + "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.5.7.tgz", + "integrity": "sha512-LbrmJOMUSdEVxIKvdcJzQC+nQhe8FUZQTXQy6+I75skNgn3OoQ0DZA8YnFa7gp8tqtL3KPf1kmo0R5DoApeSGQ==", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/space-separated-tokens": { + "version": "1.1.5", + "resolved": "https://registry.npmjs.org/space-separated-tokens/-/space-separated-tokens-1.1.5.tgz", + "integrity": "sha512-q/JSVd1Lptzhf5bkYm4ob4iWPjx0KiRe3sRFBNrVqbJkFaBm5vbbowy1mymoPNLRa52+oadOhJ+K49wsSeSjTA==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/state-toggle": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/state-toggle/-/state-toggle-1.0.3.tgz", + "integrity": 
"sha512-d/5Z4/2iiCnHw6Xzghyhb+GcmF89bxwgXG60wjIiZaxnymbyOmI8Hk4VqHXiVVp6u2ysaskFfXg3ekCj4WNftQ==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/stringify-entities": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/stringify-entities/-/stringify-entities-3.1.0.tgz", + "integrity": "sha512-3FP+jGMmMV/ffZs86MoghGqAoqXAdxLrJP4GUdrDN1aIScYih5tuIO3eF4To5AJZ79KDZ8Fpdy7QJnK8SsL1Vg==", + "dependencies": { + "character-entities-html4": "^1.0.0", + "character-entities-legacy": "^1.0.0", + "xtend": "^4.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": "sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dependencies": { + "has-flag": "^3.0.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/supports-preserve-symlinks-flag": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz", + "integrity": "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/to-fast-properties": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/to-fast-properties/-/to-fast-properties-2.0.0.tgz", + "integrity": "sha512-/OaKK0xYrs3DmxRYqL/yDc+FxFUVYhDlXMhRmv3z915w2HF1tnN1omB354j8VUGO/hbRzyD6Y3sA7v7GS/ceog==", + "engines": { + "node": ">=4" + } + }, + "node_modules/to-regex-range": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", + "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", + "dependencies": { + "is-number": "^7.0.0" + }, + "engines": { + "node": ">=8.0" + } + }, + "node_modules/to-vfile": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/to-vfile/-/to-vfile-6.1.0.tgz", + "integrity": "sha512-BxX8EkCxOAZe+D/ToHdDsJcVI4HqQfmw0tCkp31zf3dNP/XWIAjU4CmeuSwsSoOzOTqHPOL0KUzyZqJplkD0Qw==", + "dependencies": { + "is-buffer": "^2.0.0", + "vfile": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/trim": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", + "integrity": "sha512-3JVP2YVqITUisXblCDq/Bi4P9457G/sdEamInkyvCsjbTcXLXIiG7XCb4kGMFWh6JGXesS3TKxOPtrncN/xe8w==", + "deprecated": "Use String.prototype.trim() instead" + }, + "node_modules/trim-trailing-lines": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/trim-trailing-lines/-/trim-trailing-lines-1.1.4.tgz", + "integrity": "sha512-rjUWSqnfTNrjbB9NQWfPMH/xRK1deHeGsHoVfpxJ++XeYXE0d6B1En37AHfw3jtfTU7dzMzZL2jjpe8Qb5gLIQ==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/trough": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/trough/-/trough-1.0.5.tgz", + "integrity": "sha512-rvuRbTarPXmMb79SmzEp8aqXNKcK+y0XaB298IXueQ8I2PsrATcPBCSPyK/dDNa2iWOhKlfNnOjdAOTBU/nkFA==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/tunnel": { + "version": "0.0.6", + "resolved": 
"https://registry.npmjs.org/tunnel/-/tunnel-0.0.6.tgz", + "integrity": "sha512-1h/Lnq9yajKY2PEbBadPXj3VxsDDu844OnaAo52UVmIzIvwwtBPIuNvkjuzBlTWpfJyUbG3ez0KSBibQkj4ojg==", + "engines": { + "node": ">=0.6.11 <=0.7.0 || >=0.7.3" + } + }, + "node_modules/undici": { + "version": "5.28.4", + "resolved": "https://registry.npmjs.org/undici/-/undici-5.28.4.tgz", + "integrity": "sha512-72RFADWFqKmUb2hmmvNODKL3p9hcB6Gt2DOQMis1SEBaV6a4MH8soBvzg+95CYhCKPFedut2JY9bMfrDl9D23g==", + "dependencies": { + "@fastify/busboy": "^2.0.0" + }, + "engines": { + "node": ">=14.0" + } + }, + "node_modules/unherit": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/unherit/-/unherit-1.1.3.tgz", + "integrity": "sha512-Ft16BJcnapDKp0+J/rqFC3Rrk6Y/Ng4nzsC028k2jdDII/rdZ7Wd3pPT/6+vIIxRagwRc9K0IUX0Ra4fKvw+WQ==", + "dependencies": { + "inherits": "^2.0.0", + "xtend": "^4.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/unified": { + "version": "9.2.2", + "resolved": "https://registry.npmjs.org/unified/-/unified-9.2.2.tgz", + "integrity": "sha512-Sg7j110mtefBD+qunSLO1lqOEKdrwBFBrR6Qd8f4uwkhWNlbkaqwHse6e7QvD3AP/MNoJdEDLaf8OxYyoWgorQ==", + "dependencies": { + "bail": "^1.0.0", + "extend": "^3.0.0", + "is-buffer": "^2.0.0", + "is-plain-obj": "^2.0.0", + "trough": "^1.0.0", + "vfile": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-builder": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/unist-builder/-/unist-builder-2.0.3.tgz", + "integrity": "sha512-f98yt5pnlMWlzP539tPc4grGMsFaQQlP/vM396b00jngsiINumNmsY8rkXjfoi1c6QaM8nQ3vaGDuoKWbe/1Uw==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-generated": { + "version": "1.1.6", + "resolved": "https://registry.npmjs.org/unist-util-generated/-/unist-util-generated-1.1.6.tgz", + "integrity": "sha512-cln2Mm1/CZzN5ttGK7vkoGw+RZ8VcUH6BtGbq98DDtRGquAAOXig1mrBQYelOwMXYS8rK+vZDyyojSjp7JX+Lg==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-position": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/unist-util-position/-/unist-util-position-3.1.0.tgz", + "integrity": "sha512-w+PkwCbYSFw8vpgWD0v7zRCl1FpY3fjDSQ3/N/wNd9Ffa4gPi8+4keqt99N3XW6F99t/mUzp2xAhNmfKWp95QA==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-remove-position": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/unist-util-remove-position/-/unist-util-remove-position-2.0.1.tgz", + "integrity": "sha512-fDZsLYIe2uT+oGFnuZmy73K6ZxOPG/Qcm+w7jbEjaFcJgbQ6cqjs/eSPzXhsmGpAsWPkqZM9pYjww5QTn3LHMA==", + "dependencies": { + "unist-util-visit": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/unist-util-visit/-/unist-util-visit-2.0.3.tgz", + "integrity": "sha512-iJ4/RczbJMkD0712mGktuGpm/U4By4FfDonL7N/9tATGIF4imikjOuagyMY53tnZq3NP6BcmlrHhEKAfGWjh7Q==", + "dependencies": { + "@types/unist": "^2.0.0", + "unist-util-is": "^4.0.0", + "unist-util-visit-parents": "^3.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit-parents": { + "version": "5.1.3", + 
"resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-5.1.3.tgz", + "integrity": "sha512-x6+y8g7wWMyQhL1iZfhIPhDAs7Xwbn9nRosDXl7qoPTSCy0yNxnKc+hWokFifWQIDGi154rdUqKvbCa4+1kLhg==", + "dependencies": { + "@types/unist": "^2.0.0", + "unist-util-is": "^5.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit-parents/node_modules/@types/unist": { + "version": "2.0.10", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.10.tgz", + "integrity": "sha512-IfYcSBWE3hLpBg8+X2SEa8LVkJdJEkT2Ese2aaLs3ptGdVtABxndrMaxuFlQ1qdFf9Q5rDvDpxI3WwgvKFAsQA==" + }, + "node_modules/unist-util-visit-parents/node_modules/unist-util-is": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-5.2.1.tgz", + "integrity": "sha512-u9njyyfEh43npf1M+yGKDGVPbY/JWEemg5nH05ncKPfi+kBbKBJoTdsogMu33uhytuLlv9y0O7GH7fEdwLdLQw==", + "dependencies": { + "@types/unist": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit/node_modules/@types/unist": { + "version": "2.0.10", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.10.tgz", + "integrity": "sha512-IfYcSBWE3hLpBg8+X2SEa8LVkJdJEkT2Ese2aaLs3ptGdVtABxndrMaxuFlQ1qdFf9Q5rDvDpxI3WwgvKFAsQA==" + }, + "node_modules/unist-util-visit/node_modules/unist-util-is": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.1.0.tgz", + "integrity": "sha512-ZOQSsnce92GrxSqlnEEseX0gi7GH9zTJZ0p9dtu87WRb/37mMPO2Ilx1s/t9vBHrFhbgweUwb+t7cIn5dxPhZg==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit/node_modules/unist-util-visit-parents": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-3.1.1.tgz", + "integrity": "sha512-1KROIZWo6bcMrZEwiH2UrXDyalAa0uqzWCxCJj6lPOvTve2WkfgCytoDTPaMnodXh1WrXOq0haVYHj99ynJlsg==", + "dependencies": { + "@types/unist": "^2.0.0", + "unist-util-is": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/universal-user-agent": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/universal-user-agent/-/universal-user-agent-6.0.1.tgz", + "integrity": "sha512-yCzhz6FN2wU1NiiQRogkTQszlQSlpWaw8SvVegAc+bDxbzHgh1vX8uIe8OYyMH6DwH+sdTJsgMl36+mSMdRJIQ==" + }, + "node_modules/uuid": { + "version": "8.3.2", + "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", + "integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==", + "bin": { + "uuid": "dist/bin/uuid" + } + }, + "node_modules/vfile": { + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/vfile/-/vfile-4.2.1.tgz", + "integrity": "sha512-O6AE4OskCG5S1emQ/4gl8zK586RqA3srz3nfK/Viy0UPToBc5Trp9BVFb1u0CjsKrAWwnpr4ifM/KBXPWwJbCA==", + "dependencies": { + "@types/unist": "^2.0.0", + "is-buffer": "^2.0.0", + "unist-util-stringify-position": "^2.0.0", + "vfile-message": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/vfile-location": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/vfile-location/-/vfile-location-3.2.0.tgz", + "integrity": 
"sha512-aLEIZKv/oxuCDZ8lkJGhuhztf/BW4M+iHdCwglA/eWc+vtuRFJj8EtgceYFX4LRjOhCAAiNHsKGssC6onJ+jbA==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/vfile-message": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-2.0.4.tgz", + "integrity": "sha512-DjssxRGkMvifUOJre00juHoP9DPWuzjxKuMDrhNbk2TdaYYBNMStsNhEOt3idrtI12VQYM/1+iM0KOzXi4pxwQ==", + "dependencies": { + "@types/unist": "^2.0.0", + "unist-util-stringify-position": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/vfile-message/node_modules/@types/unist": { + "version": "2.0.10", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.10.tgz", + "integrity": "sha512-IfYcSBWE3hLpBg8+X2SEa8LVkJdJEkT2Ese2aaLs3ptGdVtABxndrMaxuFlQ1qdFf9Q5rDvDpxI3WwgvKFAsQA==" + }, + "node_modules/vfile-message/node_modules/unist-util-stringify-position": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz", + "integrity": "sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==", + "dependencies": { + "@types/unist": "^2.0.2" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/vfile/node_modules/@types/unist": { + "version": "2.0.10", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.10.tgz", + "integrity": "sha512-IfYcSBWE3hLpBg8+X2SEa8LVkJdJEkT2Ese2aaLs3ptGdVtABxndrMaxuFlQ1qdFf9Q5rDvDpxI3WwgvKFAsQA==" + }, + "node_modules/vfile/node_modules/unist-util-stringify-position": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz", + "integrity": "sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==", + "dependencies": { + "@types/unist": "^2.0.2" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/web-namespaces": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/web-namespaces/-/web-namespaces-1.1.4.tgz", + "integrity": "sha512-wYxSGajtmoP4WxfejAPIr4l0fVh+jeMXZb08wNc0tMg6xsfZXj3cECqIK0G7ZAqUq0PP8WlMDtaOGVBTAWztNw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/wrappy": { + "version": "1.0.2", + "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", + "integrity": "sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==" + }, + "node_modules/xtend": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/xtend/-/xtend-4.0.2.tgz", + "integrity": "sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ==", + "engines": { + "node": ">=0.4" + } + } + } +} diff --git a/tools/automation/actions/link-check/package.json b/tools/automation/actions/link-check/package.json new file mode 100644 index 00000000000..8588347ba8b --- /dev/null +++ b/tools/automation/actions/link-check/package.json @@ -0,0 +1,36 @@ +{ + "name": "docs-link-check", + "version": "1.0.0", + "description": "Check links and redirects", + "main": "index.js", + "type": "module", + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + }, + "author": "josh heyer", + "dependencies": { + "@actions/core": "^1.10.1", + 
"@actions/github": "^6.0.0", + "fast-glob": "^3.2.12", + "github-slugger": "^1.5.0", + "hast-util-to-html": "^7.1.3", + "html-void-elements": "^2.0.1", + "is-absolute-url": "^3.0.3", + "js-yaml": "^4.1.0", + "mdast-util-to-string": "^1.1.0", + "rehype-parse": "^7.0.1", + "rehype-stringify": "^8.0.0", + "remark-admonitions": "github:josh-heyer/remark-admonitions", + "remark-frontmatter": "^2.0.0", + "remark-mdx": "^1.6.22", + "remark-rehype": "^8.0.0", + "remark-stringify": "^8.1.1", + "to-vfile": "^6.1.0", + "unified": "^9.2.2", + "unist-util-visit": "^2.0.3", + "unist-util-visit-parents": "^5.1.3" + }, + "overrides": { + "trim": ">=0.0.3" + } +} diff --git a/tools/user/reorg/links/README.md b/tools/user/reorg/links/README.md new file mode 100644 index 00000000000..d5164d9914e --- /dev/null +++ b/tools/user/reorg/links/README.md @@ -0,0 +1,7 @@ +# Updating links from branch renames + +This utility will examine the current branch and a base branch (origin/develop +by default), updating links for each renamed file. + +Currently this script will examine *all* renamed files and potentially touch *every* +MDX file; it's entirely on you to keep changes in your branches reasonably scoped. diff --git a/tools/user/reorg/links/lib/mdast-embedded-hast.mjs b/tools/user/reorg/links/lib/mdast-embedded-hast.mjs new file mode 100644 index 00000000000..67b7100d875 --- /dev/null +++ b/tools/user/reorg/links/lib/mdast-embedded-hast.mjs @@ -0,0 +1,195 @@ +// +// This is a collection of dirty hacks to make working with HTML embedded in Markdown a bit easier +// ...consider yourself warned +// + +import unified from "unified"; +import visit from "unist-util-visit"; +import rehypeParse from "rehype-parse"; +import hast2html from "hast-util-to-html"; +import { htmlVoidElements } from "html-void-elements"; + +export default function remarkMdxEmbeddedHast() { + const compiler = this.Compiler; + if (compiler && compiler.prototype && compiler.prototype.visitors) + attachCompiler(compiler); + return transformer; + + function transformer(tree, file) { + visit(tree, "jsx", visitor); + + function visitor(node, index, parent) { + if (/^\s* 1) return true; + // For a single child, check the position of the closing tag; if that doesn't exist, + // we can't handle it unless this specific sort of element doesn't need one + return ( + root.children[0].data?.position?.closing || + htmlVoidElements.includes(root.children[0].tagName?.toLowerCase()) + ); + } + + // ok, the other scenario that's useful to handle here is a mixture of HTML + // and Markdown. 
+    // AKA, a lone opening tag, hopefully with a closing tag later on
+    // if self-closing or a known-void, ignore for now to avoid stepping on JSX
+    function isOpeningTag(root, sourceHtml) {
+      // an opening tag has one child with no closing tag
+      if (root.children?.length !== 1) return false;
+      // gotta actually *be* an element
+      if (root.children[0].type !== "element") return false;
+      // isn't self-closing (this test may need work)
+      if (/<[^>]+\/>/.test(sourceHtml)) return false;
+      // and isn't a tag that doesn't need to close in HTML (which will probably break JSX, tbf)
+      if (htmlVoidElements.includes(root.children[0].tagName?.toLowerCase()))
+        return false;
+
+      // of course, also shouldn't have children, and shouldn't have a closing
+      return (
+        !root.children[0]?.children?.length &&
+        !root.children[0]?.data?.position?.closing
+      );
+    }
+
+    function captureToEnd(node, index, parent, hast) {
+      const tagName = hast.children[0].tagName;
+      const valueToMatch = `</${tagName}>`;
+
+      let endIndex = index + 1;
+      while (
+        endIndex < parent.children.length &&
+        parent.children[endIndex].value !== valueToMatch
+      ) {
+        if (parent.children[endIndex].type === "jsx")
+          visitor(parent.children[endIndex], endIndex, parent);
+        ++endIndex;
+      }
+
+      const end = parent.children.splice(endIndex, 1)[0];
+      if (!end) return null;
+
+      let replacement = {
+        type: "jsx-hast-embedded-mdast",
+        children: hast.children,
+        // this may be a bit too simplistic
+        block: node.position.end.line !== end.position.start.line,
+      };
+
+      replacement.children[0].children = parent.children.splice(
+        index + 1,
+        endIndex - index - 1,
+      );
+
+      return replacement;
+    }
+  }
+
+  // rewire stringify to work with the crazy crap we did above
+  // this will ALL need to be changed if we upgrade to 9.0.0+
+  function attachCompiler(compiler) {
+    const proto = compiler.prototype;
+    const opts = {
+      allowDangerousHtml: true,
+      allowDangerousCharacters: true,
+      closeSelfClosing: true,
+      entities: { useNamedReferences: true },
+    };
+
+    proto.visitors = Object.assign({}, proto.visitors, {
+      "jsx-hast": hast,
+      "jsx-hast-embedded-mdast": hastMdast,
+    });
+
+    function hast(node) {
+      // if nothing was parsed out, there's no point in trying to recreate it; just use what was there
+      if (!node.children) {
+        return (node.value || "").trim();
+      }
+
+      var newHtml = node.children.map((n) => hast2html(n, opts)).join("");
+      var hastCompHtml = unified()
+        .use(rehypeParse, {
+          emitParseErrors: true,
+          verbose: true,
+          fragment: true,
+        })
+        .parse(node.value || "")
+        .children.map((n) => hast2html(n, opts))
+        .join("");
+
+      // if logically unchanged, write the original: too easy to screw this up otherwise
+      if (newHtml === hastCompHtml) return (node.value || "").trim();
+
+      // this really only works for html right now, so escape stuff that would be interpreted as jsx
+      newHtml = newHtml.replace(/[{]/g, "&#123;");
+
+      return newHtml;
+    }
+
+    function hastMdast(node) {
+      let content = "";
+
+      if (node.block) {
+        content = this.block(node.children[0]).replace(/^\n*|\n*$/g, "");
+        if (content.length) content = "\n" + content + "\n";
+        content = "\n" + content + "\n";
+      } else {
+        content = this.all(node.children[0]).join("");
+      }
+
+      const endTag = `</${node.children[0].tagName}>`;
+      const mdastChildren = node.children[0].children;
+
+      node.children[0].children = [];
+      let container = hast2html(node.children[0], opts);
+      node.children[0].children = mdastChildren;
+
+      return container.replace(endTag, content + endTag);
+    }
+  }
+
+  function offsetPosition(node,
offsetPoint) + { + visit(node, (child) => { + if (!child.position) return; + if (child.position.start?.line) child.position.start.line += offsetPoint.line - 1; + if (child.position.start?.column) child.position.start.column += offsetPoint.column - 1; + if (child.position.start?.offset) child.position.start.offset += offsetPoint.offset; + if (child.position.end?.line) child.position.end.line += offsetPoint.line - 1; + if (child.position.end?.column) child.position.end.column += offsetPoint.column - 1; + if (child.position.end?.offset) child.position.end.offset += offsetPoint.offset; + }); + } +} diff --git a/tools/user/reorg/links/package-lock.json b/tools/user/reorg/links/package-lock.json new file mode 100644 index 00000000000..fc19c9b16ba --- /dev/null +++ b/tools/user/reorg/links/package-lock.json @@ -0,0 +1,1713 @@ +{ + "name": "links", + "version": "1.0.0", + "lockfileVersion": 3, + "requires": true, + "packages": { + "": { + "name": "links", + "version": "1.0.0", + "license": "UNLICENSED", + "dependencies": { + "fast-glob": "^3.2.12", + "github-slugger": "^1.5.0", + "hast-util-to-html": "^7.1.3", + "html-void-elements": "^2.0.1", + "is-absolute-url": "^3.0.3", + "mdast-util-to-string": "^1.1.0", + "rehype-parse": "^7.0.1", + "rehype-stringify": "^8.0.0", + "remark-admonitions": "github:josh-heyer/remark-admonitions", + "remark-frontmatter": "^2.0.0", + "remark-mdx": "^1.6.22", + "remark-rehype": "^8.0.0", + "remark-stringify": "^8.1.1", + "to-vfile": "^6.1.0", + "unified": "^9.2.2", + "unist-util-visit": "^2.0.3", + "unist-util-visit-parents": "^5.1.3", + "yaml": "^2.3.1" + } + }, + "node_modules/@babel/code-frame": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/code-frame/-/code-frame-7.24.7.tgz", + "integrity": "sha512-BcYH1CVJBO9tvyIZ2jVeXgSIMvGZ2FDRvDdOIVQyuklNKSsx+eppDEBq/g47Ayw+RqNFE+URvOShmf+f/qwAlA==", + "dependencies": { + "@babel/highlight": "^7.24.7", + "picocolors": "^1.0.0" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/core": { + "version": "7.12.9", + "resolved": "https://registry.npmjs.org/@babel/core/-/core-7.12.9.tgz", + "integrity": "sha512-gTXYh3M5wb7FRXQy+FErKFAv90BnlOuNn1QkCK2lREoPAjrQCO49+HVSrFoe5uakFAF5eenS75KbO2vQiLrTMQ==", + "dependencies": { + "@babel/code-frame": "^7.10.4", + "@babel/generator": "^7.12.5", + "@babel/helper-module-transforms": "^7.12.1", + "@babel/helpers": "^7.12.5", + "@babel/parser": "^7.12.7", + "@babel/template": "^7.12.7", + "@babel/traverse": "^7.12.9", + "@babel/types": "^7.12.7", + "convert-source-map": "^1.7.0", + "debug": "^4.1.0", + "gensync": "^1.0.0-beta.1", + "json5": "^2.1.2", + "lodash": "^4.17.19", + "resolve": "^1.3.2", + "semver": "^5.4.1", + "source-map": "^0.5.0" + }, + "engines": { + "node": ">=6.9.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/babel" + } + }, + "node_modules/@babel/generator": { + "version": "7.24.10", + "resolved": "https://registry.npmjs.org/@babel/generator/-/generator-7.24.10.tgz", + "integrity": "sha512-o9HBZL1G2129luEUlG1hB4N/nlYNWHnpwlND9eOMclRqqu1YDy2sSYVCFUZwl8I1Gxh+QSRrP2vD7EpUmFVXxg==", + "dependencies": { + "@babel/types": "^7.24.9", + "@jridgewell/gen-mapping": "^0.3.5", + "@jridgewell/trace-mapping": "^0.3.25", + "jsesc": "^2.5.1" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-environment-visitor": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-environment-visitor/-/helper-environment-visitor-7.24.7.tgz", + "integrity": 
"sha512-DoiN84+4Gnd0ncbBOM9AZENV4a5ZiL39HYMyZJGZ/AZEykHYdJw0wW3kdcsh9/Kn+BRXHLkkklZ51ecPKmI1CQ==", + "dependencies": { + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-function-name": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-function-name/-/helper-function-name-7.24.7.tgz", + "integrity": "sha512-FyoJTsj/PEUWu1/TYRiXTIHc8lbw+TDYkZuoE43opPS5TrI7MyONBE1oNvfguEXAD9yhQRrVBnXdXzSLQl9XnA==", + "dependencies": { + "@babel/template": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-hoist-variables": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-hoist-variables/-/helper-hoist-variables-7.24.7.tgz", + "integrity": "sha512-MJJwhkoGy5c4ehfoRyrJ/owKeMl19U54h27YYftT0o2teQ3FJ3nQUf/I3LlJsX4l3qlw7WRXUmiyajvHXoTubQ==", + "dependencies": { + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-module-imports": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-module-imports/-/helper-module-imports-7.24.7.tgz", + "integrity": "sha512-8AyH3C+74cgCVVXow/myrynrAGv+nTVg5vKu2nZph9x7RcRwzmh0VFallJuFTZ9mx6u4eSdXZfcOzSqTUm0HCA==", + "dependencies": { + "@babel/traverse": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-module-transforms": { + "version": "7.24.9", + "resolved": "https://registry.npmjs.org/@babel/helper-module-transforms/-/helper-module-transforms-7.24.9.tgz", + "integrity": "sha512-oYbh+rtFKj/HwBQkFlUzvcybzklmVdVV3UU+mN7n2t/q3yGHbuVdNxyFvSBO1tfvjyArpHNcWMAzsSPdyI46hw==", + "dependencies": { + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-module-imports": "^7.24.7", + "@babel/helper-simple-access": "^7.24.7", + "@babel/helper-split-export-declaration": "^7.24.7", + "@babel/helper-validator-identifier": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0" + } + }, + "node_modules/@babel/helper-plugin-utils": { + "version": "7.10.4", + "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.10.4.tgz", + "integrity": "sha512-O4KCvQA6lLiMU9l2eawBPMf1xPP8xPfB3iEQw150hOVTqj/rfXz0ThTb4HEzqQfs2Bmo5Ay8BzxfzVtBrr9dVg==" + }, + "node_modules/@babel/helper-simple-access": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-simple-access/-/helper-simple-access-7.24.7.tgz", + "integrity": "sha512-zBAIvbCMh5Ts+b86r/CjU+4XGYIs+R1j951gxI3KmmxBMhCg4oQMsv6ZXQ64XOm/cvzfU1FmoCyt6+owc5QMYg==", + "dependencies": { + "@babel/traverse": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-split-export-declaration": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-split-export-declaration/-/helper-split-export-declaration-7.24.7.tgz", + "integrity": "sha512-oy5V7pD+UvfkEATUKvIjvIAH/xCzfsFVw7ygW2SI6NClZzquT+mwdTfgfdbUiceh6iQO0CHtCPsyze/MZ2YbAA==", + "dependencies": { + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helper-string-parser": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/helper-string-parser/-/helper-string-parser-7.24.8.tgz", + "integrity": "sha512-pO9KhhRcuUyGnJWwyEgnRJTSIZHiT+vMD0kPeD+so0l7mxkMT19g3pjY9GTnHySck/hDzq+dtW/4VgnMkippsQ==", + "engines": { + "node": 
">=6.9.0" + } + }, + "node_modules/@babel/helper-validator-identifier": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/helper-validator-identifier/-/helper-validator-identifier-7.24.7.tgz", + "integrity": "sha512-rR+PBcQ1SMQDDyF6X0wxtG8QyLCgUB0eRAGguqRLfkCA87l7yAP7ehq8SNj96OOGTO8OBV70KhuFYcIkHXOg0w==", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/helpers": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.24.8.tgz", + "integrity": "sha512-gV2265Nkcz7weJJfvDoAEVzC1e2OTDpkGbEsebse8koXUJUXPsCMi7sRo/+SPMuMZ9MtUPnGwITTnQnU5YjyaQ==", + "dependencies": { + "@babel/template": "^7.24.7", + "@babel/types": "^7.24.8" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/highlight": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/highlight/-/highlight-7.24.7.tgz", + "integrity": "sha512-EStJpq4OuY8xYfhGVXngigBJRWxftKX9ksiGDnmlY3o7B/V7KIAc9X4oiK87uPJSc/vs5L869bem5fhZa8caZw==", + "dependencies": { + "@babel/helper-validator-identifier": "^7.24.7", + "chalk": "^2.4.2", + "js-tokens": "^4.0.0", + "picocolors": "^1.0.0" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/parser": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.24.8.tgz", + "integrity": "sha512-WzfbgXOkGzZiXXCqk43kKwZjzwx4oulxZi3nq2TYL9mOjQv6kYwul9mz6ID36njuL7Xkp6nJEfok848Zj10j/w==", + "bin": { + "parser": "bin/babel-parser.js" + }, + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@babel/plugin-proposal-object-rest-spread": { + "version": "7.12.1", + "resolved": "https://registry.npmjs.org/@babel/plugin-proposal-object-rest-spread/-/plugin-proposal-object-rest-spread-7.12.1.tgz", + "integrity": "sha512-s6SowJIjzlhx8o7lsFx5zmY4At6CTtDvgNQDdPzkBQucle58A6b/TTeEBYtyDgmcXjUTM+vE8YOGHZzzbc/ioA==", + "deprecated": "This proposal has been merged to the ECMAScript standard and thus this plugin is no longer maintained. 
Please use @babel/plugin-transform-object-rest-spread instead.", + "dependencies": { + "@babel/helper-plugin-utils": "^7.10.4", + "@babel/plugin-syntax-object-rest-spread": "^7.8.0", + "@babel/plugin-transform-parameters": "^7.12.1" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-jsx": { + "version": "7.12.1", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-jsx/-/plugin-syntax-jsx-7.12.1.tgz", + "integrity": "sha512-1yRi7yAtB0ETgxdY9ti/p2TivUxJkTdhu/ZbF9MshVGqOx1TdB3b7xCXs49Fupgg50N45KcAsRP/ZqWjs9SRjg==", + "dependencies": { + "@babel/helper-plugin-utils": "^7.10.4" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-syntax-object-rest-spread": { + "version": "7.8.3", + "resolved": "https://registry.npmjs.org/@babel/plugin-syntax-object-rest-spread/-/plugin-syntax-object-rest-spread-7.8.3.tgz", + "integrity": "sha512-XoqMijGZb9y3y2XskN+P1wUGiVwWZ5JmoDRwx5+3GmEplNyVM2s2Dg8ILFQm8rWM48orGy5YpI5Bl8U1y7ydlA==", + "dependencies": { + "@babel/helper-plugin-utils": "^7.8.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-parameters": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/plugin-transform-parameters/-/plugin-transform-parameters-7.24.7.tgz", + "integrity": "sha512-yGWW5Rr+sQOhK0Ot8hjDJuxU3XLRQGflvT4lhlSY0DFvdb3TwKaY26CJzHtYllU0vT9j58hc37ndFPsqT1SrzA==", + "dependencies": { + "@babel/helper-plugin-utils": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + }, + "peerDependencies": { + "@babel/core": "^7.0.0-0" + } + }, + "node_modules/@babel/plugin-transform-parameters/node_modules/@babel/helper-plugin-utils": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/helper-plugin-utils/-/helper-plugin-utils-7.24.8.tgz", + "integrity": "sha512-FFWx5142D8h2Mgr/iPVGH5G7w6jDn4jUSpZTyDnQO0Yn7Ks2Kuz6Pci8H6MPCoUJegd/UZQ3tAvfLCxQSnWWwg==", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/template": { + "version": "7.24.7", + "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.24.7.tgz", + "integrity": "sha512-jYqfPrU9JTF0PmPy1tLYHW4Mp4KlgxJD9l2nP9fD6yT/ICi554DmrWBAEYpIelzjHf1msDP3PxJIRt/nFNfBig==", + "dependencies": { + "@babel/code-frame": "^7.24.7", + "@babel/parser": "^7.24.7", + "@babel/types": "^7.24.7" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/traverse": { + "version": "7.24.8", + "resolved": "https://registry.npmjs.org/@babel/traverse/-/traverse-7.24.8.tgz", + "integrity": "sha512-t0P1xxAPzEDcEPmjprAQq19NWum4K0EQPjMwZQZbHt+GiZqvjCHjj755Weq1YRPVzBI+3zSfvScfpnuIecVFJQ==", + "dependencies": { + "@babel/code-frame": "^7.24.7", + "@babel/generator": "^7.24.8", + "@babel/helper-environment-visitor": "^7.24.7", + "@babel/helper-function-name": "^7.24.7", + "@babel/helper-hoist-variables": "^7.24.7", + "@babel/helper-split-export-declaration": "^7.24.7", + "@babel/parser": "^7.24.8", + "@babel/types": "^7.24.8", + "debug": "^4.3.1", + "globals": "^11.1.0" + }, + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/@babel/types": { + "version": "7.24.9", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.24.9.tgz", + "integrity": "sha512-xm8XrMKz0IlUdocVbYJe0Z9xEgidU7msskG8BbhnTPK/HZ2z/7FP7ykqPgrUH+C+r414mNfNWam1f2vqOjqjYQ==", + "dependencies": { + "@babel/helper-string-parser": "^7.24.8", + "@babel/helper-validator-identifier": "^7.24.7", + "to-fast-properties": "^2.0.0" + }, + "engines": { + "node": 
">=6.9.0" + } + }, + "node_modules/@jridgewell/gen-mapping": { + "version": "0.3.5", + "resolved": "https://registry.npmjs.org/@jridgewell/gen-mapping/-/gen-mapping-0.3.5.tgz", + "integrity": "sha512-IzL8ZoEDIBRWEzlCcRhOaCupYyN5gdIK+Q6fbFdPDg6HqX6jpkItn7DFIpW9LQzXG6Df9sA7+OKnq0qlz/GaQg==", + "dependencies": { + "@jridgewell/set-array": "^1.2.1", + "@jridgewell/sourcemap-codec": "^1.4.10", + "@jridgewell/trace-mapping": "^0.3.24" + }, + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/resolve-uri": { + "version": "3.1.2", + "resolved": "https://registry.npmjs.org/@jridgewell/resolve-uri/-/resolve-uri-3.1.2.tgz", + "integrity": "sha512-bRISgCIjP20/tbWSPWMEi54QVPRZExkuD9lJL+UIxUKtwVJA8wW1Trb1jMs1RFXo1CBTNZ/5hpC9QvmKWdopKw==", + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/set-array": { + "version": "1.2.1", + "resolved": "https://registry.npmjs.org/@jridgewell/set-array/-/set-array-1.2.1.tgz", + "integrity": "sha512-R8gLRTZeyp03ymzP/6Lil/28tGeGEzhx1q2k703KGWRAI1VdvPIXdG70VJc2pAMw3NA6JKL5hhFu1sJX0Mnn/A==", + "engines": { + "node": ">=6.0.0" + } + }, + "node_modules/@jridgewell/sourcemap-codec": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/@jridgewell/sourcemap-codec/-/sourcemap-codec-1.5.0.tgz", + "integrity": "sha512-gv3ZRaISU3fjPAgNsriBRqGWQL6quFx04YMPW/zD8XMLsU32mhCCbfbO6KZFLjvYpCZ8zyDEgqsgf+PwPaM7GQ==" + }, + "node_modules/@jridgewell/trace-mapping": { + "version": "0.3.25", + "resolved": "https://registry.npmjs.org/@jridgewell/trace-mapping/-/trace-mapping-0.3.25.tgz", + "integrity": "sha512-vNk6aEwybGtawWmy/PzwnGDOjCkLWSD2wqvjGGAgOAwCGWySYXfYoxt00IJkTF+8Lb57DwOb3Aa0o9CApepiYQ==", + "dependencies": { + "@jridgewell/resolve-uri": "^3.1.0", + "@jridgewell/sourcemap-codec": "^1.4.14" + } + }, + "node_modules/@mdx-js/util": { + "version": "1.6.22", + "resolved": "https://registry.npmjs.org/@mdx-js/util/-/util-1.6.22.tgz", + "integrity": "sha512-H1rQc1ZOHANWBvPcW+JpGwr+juXSxM8Q8YCkm3GhZd8REu1fHR3z99CErO1p9pkcfcxZnMdIZdIsXkOHY0NilA==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/@nodelib/fs.scandir": { + "version": "2.1.5", + "resolved": "https://registry.npmjs.org/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz", + "integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==", + "dependencies": { + "@nodelib/fs.stat": "2.0.5", + "run-parallel": "^1.1.9" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/@nodelib/fs.stat": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz", + "integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==", + "engines": { + "node": ">= 8" + } + }, + "node_modules/@nodelib/fs.walk": { + "version": "1.2.8", + "resolved": "https://registry.npmjs.org/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz", + "integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==", + "dependencies": { + "@nodelib/fs.scandir": "2.1.5", + "fastq": "^1.6.0" + }, + "engines": { + "node": ">= 8" + } + }, + "node_modules/@types/hast": { + "version": "2.3.10", + "resolved": "https://registry.npmjs.org/@types/hast/-/hast-2.3.10.tgz", + "integrity": "sha512-McWspRw8xx8J9HurkVBfYj0xKoE25tOFlHGdx4MJ5xORQrMGZNqJhVQWaIbm6Oyla5kYOXtDiopzKRJzEOkwJw==", + "dependencies": { + "@types/unist": "^2" + } + }, + "node_modules/@types/mdast": { + "version": "3.0.15", + "resolved": 
"https://registry.npmjs.org/@types/mdast/-/mdast-3.0.15.tgz", + "integrity": "sha512-LnwD+mUEfxWMa1QpDraczIn6k0Ee3SMicuYSSzS6ZYl2gKS09EClnJYGd8Du6rfc5r/GZEk5o1mRb8TaTj03sQ==", + "dependencies": { + "@types/unist": "^2" + } + }, + "node_modules/@types/parse5": { + "version": "5.0.3", + "resolved": "https://registry.npmjs.org/@types/parse5/-/parse5-5.0.3.tgz", + "integrity": "sha512-kUNnecmtkunAoQ3CnjmMkzNU/gtxG8guhi+Fk2U/kOpIKjIMKnXGp4IJCgQJrXSgMsWYimYG4TGjz/UzbGEBTw==" + }, + "node_modules/@types/unist": { + "version": "2.0.10", + "resolved": "https://registry.npmjs.org/@types/unist/-/unist-2.0.10.tgz", + "integrity": "sha512-IfYcSBWE3hLpBg8+X2SEa8LVkJdJEkT2Ese2aaLs3ptGdVtABxndrMaxuFlQ1qdFf9Q5rDvDpxI3WwgvKFAsQA==" + }, + "node_modules/ansi-styles": { + "version": "3.2.1", + "resolved": "https://registry.npmjs.org/ansi-styles/-/ansi-styles-3.2.1.tgz", + "integrity": "sha512-VT0ZI6kZRdTh8YyJw3SMbYm/u+NqfsAxEpWO0Pf9sq8/e94WxxOpPKx9FR1FlyCtOVDNOQ+8ntlqFxiRc+r5qA==", + "dependencies": { + "color-convert": "^1.9.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/bail": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/bail/-/bail-1.0.5.tgz", + "integrity": "sha512-xFbRxM1tahm08yHBP16MMjVUAvDaBMD38zsM9EMAUN61omwLmKlOpB/Zku5QkjZ8TZ4vn53pj+t518cH0S03RQ==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/braces": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz", + "integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==", + "dependencies": { + "fill-range": "^7.1.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/ccount": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/ccount/-/ccount-1.1.0.tgz", + "integrity": "sha512-vlNK021QdI7PNeiUh/lKkC/mNHHfV0m/Ad5JoI0TYtlBnJAslM/JIkm/tGC88bkLIwO6OQ5uV6ztS6kVAtCDlg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/chalk": { + "version": "2.4.2", + "resolved": "https://registry.npmjs.org/chalk/-/chalk-2.4.2.tgz", + "integrity": "sha512-Mti+f9lpJNcwF4tWV8/OrTTtF1gZi+f8FqlyAdouralcFWFQWF2+NgCHShjkCb+IFBLq9buZwE1xckQU4peSuQ==", + "dependencies": { + "ansi-styles": "^3.2.1", + "escape-string-regexp": "^1.0.5", + "supports-color": "^5.3.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/character-entities": { + "version": "1.2.4", + "resolved": "https://registry.npmjs.org/character-entities/-/character-entities-1.2.4.tgz", + "integrity": "sha512-iBMyeEHxfVnIakwOuDXpVkc54HijNgCyQB2w0VfGQThle6NXn50zU6V/u+LDhxHcDUPojn6Kpga3PTAD8W1bQw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/character-entities-html4": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/character-entities-html4/-/character-entities-html4-1.1.4.tgz", + "integrity": "sha512-HRcDxZuZqMx3/a+qrzxdBKBPUpxWEq9xw2OPZ3a/174ihfrQKVsFhqtthBInFy1zZ9GgZyFXOatNujm8M+El3g==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/character-entities-legacy": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/character-entities-legacy/-/character-entities-legacy-1.1.4.tgz", + "integrity": "sha512-3Xnr+7ZFS1uxeiUDvV02wQ+QDbc55o97tIV5zHScSPJpcLm/r0DFPcoY3tYRp+VZukxuMeKgXYmsXQHO05zQeA==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + 
"node_modules/character-reference-invalid": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/character-reference-invalid/-/character-reference-invalid-1.1.4.tgz", + "integrity": "sha512-mKKUkUbhPpQlCOfIuZkvSEgktjPFIsZKRRbC6KWVEMvlzblj3i3asQv5ODsrwt0N3pHAEvjP8KTQPHkp0+6jOg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/collapse-white-space": { + "version": "1.0.6", + "resolved": "https://registry.npmjs.org/collapse-white-space/-/collapse-white-space-1.0.6.tgz", + "integrity": "sha512-jEovNnrhMuqyCcjfEJA56v0Xq8SkIoPKDyaHahwo3POf4qcSXqMYuwNcOTzp74vTsR9Tn08z4MxWqAhcekogkQ==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/color-convert": { + "version": "1.9.3", + "resolved": "https://registry.npmjs.org/color-convert/-/color-convert-1.9.3.tgz", + "integrity": "sha512-QfAUtd+vFdAtFQcC8CCyYt1fYWxSqAiK2cSD6zDB8N3cpsEBAvRxp9zOGg6G/SHHJYAT88/az/IuDGALsNVbGg==", + "dependencies": { + "color-name": "1.1.3" + } + }, + "node_modules/color-name": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/color-name/-/color-name-1.1.3.tgz", + "integrity": "sha512-72fSenhMw2HZMTVHeCA9KCmpEIbzWiQsjN+BHcBbS9vr1mtt+vJjPdksIBNUmKAW8TFUDPJK5SUU3QhE9NEXDw==" + }, + "node_modules/comma-separated-tokens": { + "version": "1.0.8", + "resolved": "https://registry.npmjs.org/comma-separated-tokens/-/comma-separated-tokens-1.0.8.tgz", + "integrity": "sha512-GHuDRO12Sypu2cV70d1dkA2EUmXHgntrzbpvOB+Qy+49ypNfGgFQIC2fhhXbnyrJRynDCAARsT7Ou0M6hirpfw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/convert-source-map": { + "version": "1.9.0", + "resolved": "https://registry.npmjs.org/convert-source-map/-/convert-source-map-1.9.0.tgz", + "integrity": "sha512-ASFBup0Mz1uyiIjANan1jzLQami9z1PoYSZCiiYW2FczPbenXc45FZdBZLzOT+r6+iciuEModtmCti+hjaAk0A==" + }, + "node_modules/debug": { + "version": "4.3.5", + "resolved": "https://registry.npmjs.org/debug/-/debug-4.3.5.tgz", + "integrity": "sha512-pt0bNEmneDIvdL1Xsd9oDQ/wrQRkXDT4AUWlNZNPKvW5x/jyO9VFXkJUP07vQ2upmw5PlaITaPKc31jK13V+jg==", + "dependencies": { + "ms": "2.1.2" + }, + "engines": { + "node": ">=6.0" + }, + "peerDependenciesMeta": { + "supports-color": { + "optional": true + } + } + }, + "node_modules/escape-string-regexp": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/escape-string-regexp/-/escape-string-regexp-1.0.5.tgz", + "integrity": "sha512-vbRorB5FUQWvla16U8R/qgaFIya2qGzwDrNmCZuYKrbdSUMG6I1ZCGQRefkRVhuOkIGVne7BQ35DSfo1qvJqFg==", + "engines": { + "node": ">=0.8.0" + } + }, + "node_modules/extend": { + "version": "3.0.2", + "resolved": "https://registry.npmjs.org/extend/-/extend-3.0.2.tgz", + "integrity": "sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==" + }, + "node_modules/fast-glob": { + "version": "3.2.12", + "resolved": "https://registry.npmjs.org/fast-glob/-/fast-glob-3.2.12.tgz", + "integrity": "sha512-DVj4CQIYYow0BlaelwK1pHl5n5cRSJfM60UA0zK891sVInoPri2Ekj7+e1CT3/3qxXenpI+nBBmQAcJPJgaj4w==", + "dependencies": { + "@nodelib/fs.stat": "^2.0.2", + "@nodelib/fs.walk": "^1.2.3", + "glob-parent": "^5.1.2", + "merge2": "^1.3.0", + "micromatch": "^4.0.4" + }, + "engines": { + "node": ">=8.6.0" + } + }, + "node_modules/fastq": { + "version": "1.15.0", + "resolved": "https://registry.npmjs.org/fastq/-/fastq-1.15.0.tgz", + "integrity": 
"sha512-wBrocU2LCXXa+lWBt8RoIRD89Fi8OdABODa/kEnyeyjS5aZO5/GNvI5sEINADqP/h8M29UHTHUb53sUu5Ihqdw==", + "dependencies": { + "reusify": "^1.0.4" + } + }, + "node_modules/fault": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/fault/-/fault-1.0.4.tgz", + "integrity": "sha512-CJ0HCB5tL5fYTEA7ToAq5+kTwd++Borf1/bifxd9iT70QcXr4MRrO3Llf8Ifs70q+SJcGHFtnIE/Nw6giCtECA==", + "dependencies": { + "format": "^0.2.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/fill-range": { + "version": "7.1.1", + "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz", + "integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==", + "dependencies": { + "to-regex-range": "^5.0.1" + }, + "engines": { + "node": ">=8" + } + }, + "node_modules/format": { + "version": "0.2.2", + "resolved": "https://registry.npmjs.org/format/-/format-0.2.2.tgz", + "integrity": "sha512-wzsgA6WOq+09wrU1tsJ09udeR/YZRaeArL9e1wPbFg3GG2yDnC2ldKpxs4xunpFF9DgqCqOIra3bc1HWrJ37Ww==", + "engines": { + "node": ">=0.4.x" + } + }, + "node_modules/function-bind": { + "version": "1.1.2", + "resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz", + "integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==", + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/gensync": { + "version": "1.0.0-beta.2", + "resolved": "https://registry.npmjs.org/gensync/-/gensync-1.0.0-beta.2.tgz", + "integrity": "sha512-3hN7NaskYvMDLQY55gnW3NQ+mesEAepTqlg+VEbj7zzqEMBVNhzcGYYeqFo/TlYz6eQiFcp1HcsCZO+nGgS8zg==", + "engines": { + "node": ">=6.9.0" + } + }, + "node_modules/github-slugger": { + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/github-slugger/-/github-slugger-1.5.0.tgz", + "integrity": "sha512-wIh+gKBI9Nshz2o46B0B3f5k/W+WI9ZAv6y5Dn5WJ5SK1t0TnDimB4WE5rmTD05ZAIn8HALCZVmCsvj0w0v0lw==" + }, + "node_modules/glob-parent": { + "version": "5.1.2", + "resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-5.1.2.tgz", + "integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==", + "dependencies": { + "is-glob": "^4.0.1" + }, + "engines": { + "node": ">= 6" + } + }, + "node_modules/globals": { + "version": "11.12.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-11.12.0.tgz", + "integrity": "sha512-WOBp/EEGUiIsJSp7wcv/y6MO+lV9UoncWqxuFfm8eBwzWNgyfBd6Gz+IeKQ9jCmyhoH99g15M3T+QaVHFjizVA==", + "engines": { + "node": ">=4" + } + }, + "node_modules/has-flag": { + "version": "3.0.0", + "resolved": "https://registry.npmjs.org/has-flag/-/has-flag-3.0.0.tgz", + "integrity": "sha512-sKJf1+ceQBr4SMkvQnBDNDtf4TXpVhVGateu0t918bl30FnbE2m4vNLX+VWe/dpjlb+HugGYzW7uQXH98HPEYw==", + "engines": { + "node": ">=4" + } + }, + "node_modules/hasown": { + "version": "2.0.2", + "resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz", + "integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==", + "dependencies": { + "function-bind": "^1.1.2" + }, + "engines": { + "node": ">= 0.4" + } + }, + "node_modules/hast-util-from-parse5": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/hast-util-from-parse5/-/hast-util-from-parse5-6.0.1.tgz", + "integrity": "sha512-jeJUWiN5pSxW12Rh01smtVkZgZr33wBokLzKLwinYOUfSzm1Nl/c3GUGebDyOKjdsRgMvoVbV0VpAcpjF4NrJA==", + "dependencies": { + "@types/parse5": 
"^5.0.0", + "hastscript": "^6.0.0", + "property-information": "^5.0.0", + "vfile": "^4.0.0", + "vfile-location": "^3.2.0", + "web-namespaces": "^1.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-is-element": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/hast-util-is-element/-/hast-util-is-element-1.1.0.tgz", + "integrity": "sha512-oUmNua0bFbdrD/ELDSSEadRVtWZOf3iF6Lbv81naqsIV99RnSCieTbWuWCY8BAeEfKJTKl0gRdokv+dELutHGQ==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-parse-selector": { + "version": "2.2.5", + "resolved": "https://registry.npmjs.org/hast-util-parse-selector/-/hast-util-parse-selector-2.2.5.tgz", + "integrity": "sha512-7j6mrk/qqkSehsM92wQjdIgWM2/BW61u/53G6xmC8i1OmEdKLHbk419QKQUjz6LglWsfqoiHmyMRkP1BGjecNQ==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-to-html": { + "version": "7.1.3", + "resolved": "https://registry.npmjs.org/hast-util-to-html/-/hast-util-to-html-7.1.3.tgz", + "integrity": "sha512-yk2+1p3EJTEE9ZEUkgHsUSVhIpCsL/bvT8E5GzmWc+N1Po5gBw+0F8bo7dpxXR0nu0bQVxVZGX2lBGF21CmeDw==", + "dependencies": { + "ccount": "^1.0.0", + "comma-separated-tokens": "^1.0.0", + "hast-util-is-element": "^1.0.0", + "hast-util-whitespace": "^1.0.0", + "html-void-elements": "^1.0.0", + "property-information": "^5.0.0", + "space-separated-tokens": "^1.0.0", + "stringify-entities": "^3.0.1", + "unist-util-is": "^4.0.0", + "xtend": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hast-util-to-html/node_modules/html-void-elements": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/html-void-elements/-/html-void-elements-1.0.5.tgz", + "integrity": "sha512-uE/TxKuyNIcx44cIWnjr/rfIATDH7ZaOMmstu0CwhFG1Dunhlp4OC6/NMbhiwoq5BpW0ubi303qnEk/PZj614w==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/hast-util-whitespace": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/hast-util-whitespace/-/hast-util-whitespace-1.0.4.tgz", + "integrity": "sha512-I5GTdSfhYfAPNztx2xJRQpG8cuDSNt599/7YUn7Gx/WxNMsG+a835k97TDkFgk123cwjfwINaZknkKkphx/f2A==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/hastscript": { + "version": "6.0.0", + "resolved": "https://registry.npmjs.org/hastscript/-/hastscript-6.0.0.tgz", + "integrity": "sha512-nDM6bvd7lIqDUiYEiu5Sl/+6ReP0BMk/2f4U/Rooccxkj0P5nm+acM5PrGJ/t5I8qPGiqZSE6hVAwZEdZIvP4w==", + "dependencies": { + "@types/hast": "^2.0.0", + "comma-separated-tokens": "^1.0.0", + "hast-util-parse-selector": "^2.0.0", + "property-information": "^5.0.0", + "space-separated-tokens": "^1.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/html-void-elements": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/html-void-elements/-/html-void-elements-2.0.1.tgz", + "integrity": "sha512-0quDb7s97CfemeJAnW9wC0hw78MtW7NU3hqtCD75g2vFlDLt36llsYD7uB7SUzojLMP24N5IatXf7ylGXiGG9A==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/inherits": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.4.tgz", + "integrity": 
"sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==" + }, + "node_modules/is-absolute-url": { + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/is-absolute-url/-/is-absolute-url-3.0.3.tgz", + "integrity": "sha512-opmNIX7uFnS96NtPmhWQgQx6/NYFgsUXYMllcfzwWKUMwfo8kku1TvE6hkNcH+Q1ts5cMVrsY7j0bxXQDciu9Q==", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-alphabetical": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-alphabetical/-/is-alphabetical-1.0.4.tgz", + "integrity": "sha512-DwzsA04LQ10FHTZuL0/grVDk4rFoVH1pjAToYwBrHSxcrBIGQuXrQMtD5U1b0U2XVgKZCTLLP8u2Qxqhy3l2Vg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-alphanumeric": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/is-alphanumeric/-/is-alphanumeric-1.0.0.tgz", + "integrity": "sha512-ZmRL7++ZkcMOfDuWZuMJyIVLr2keE1o/DeNWh1EmgqGhUcV+9BIVsx0BcSBOHTZqzjs4+dISzr2KAeBEWGgXeA==", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-alphanumerical": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-alphanumerical/-/is-alphanumerical-1.0.4.tgz", + "integrity": "sha512-UzoZUr+XfVz3t3v4KyGEniVL9BDRoQtY7tOyrRybkVNjDFWyo1yhXNGrrBTQxp3ib9BLAWs7k2YKBQsFRkZG9A==", + "dependencies": { + "is-alphabetical": "^1.0.0", + "is-decimal": "^1.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-buffer": { + "version": "2.0.5", + "resolved": "https://registry.npmjs.org/is-buffer/-/is-buffer-2.0.5.tgz", + "integrity": "sha512-i2R6zNFDwgEHJyQUtJEk0XFi1i0dPFn/oqjK3/vPCcDeJvW5NQ83V8QbicfF1SupOaB0h8ntgBC2YiE7dfyctQ==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "engines": { + "node": ">=4" + } + }, + "node_modules/is-core-module": { + "version": "2.14.0", + "resolved": "https://registry.npmjs.org/is-core-module/-/is-core-module-2.14.0.tgz", + "integrity": "sha512-a5dFJih5ZLYlRtDc0dZWP7RiKr6xIKzmn/oAYCDvdLThadVgyJwlaoQPmRtMSpz+rk0OGAgIu+TcM9HUF0fk1A==", + "dependencies": { + "hasown": "^2.0.2" + }, + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/is-decimal": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-decimal/-/is-decimal-1.0.4.tgz", + "integrity": "sha512-RGdriMmQQvZ2aqaQq3awNA6dCGtKpiDFcOzrTWrDAT2MiWrKQVPmxLGHl7Y2nNu6led0kEyoX0enY0qXYsv9zw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-extglob": { + "version": "2.1.1", + "resolved": "https://registry.npmjs.org/is-extglob/-/is-extglob-2.1.1.tgz", + "integrity": "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ==", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-glob": { + "version": "4.0.3", + "resolved": "https://registry.npmjs.org/is-glob/-/is-glob-4.0.3.tgz", + "integrity": "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg==", + "dependencies": { + "is-extglob": "^2.1.1" + }, + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/is-hexadecimal": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-hexadecimal/-/is-hexadecimal-1.0.4.tgz", + "integrity": 
"sha512-gyPJuv83bHMpocVYoqof5VDiZveEoGoFL8m3BXNb2VW8Xs+rz9kqO8LOQ5DH6EsuvilT1ApazU0pyl+ytbPtlw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-number": { + "version": "7.0.0", + "resolved": "https://registry.npmjs.org/is-number/-/is-number-7.0.0.tgz", + "integrity": "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng==", + "engines": { + "node": ">=0.12.0" + } + }, + "node_modules/is-plain-obj": { + "version": "2.1.0", + "resolved": "https://registry.npmjs.org/is-plain-obj/-/is-plain-obj-2.1.0.tgz", + "integrity": "sha512-YWnfyRwxL/+SsrWYfOpUtz5b3YD+nyfkHvjbcanzk8zgyO4ASD67uVMRt8k5bM4lLMDnXfriRhOpemw+NfT1eA==", + "engines": { + "node": ">=8" + } + }, + "node_modules/is-whitespace-character": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-whitespace-character/-/is-whitespace-character-1.0.4.tgz", + "integrity": "sha512-SDweEzfIZM0SJV0EUga669UTKlmL0Pq8Lno0QDQsPnvECB3IM2aP0gdx5TrU0A01MAPfViaZiI2V1QMZLaKK5w==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/is-word-character": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/is-word-character/-/is-word-character-1.0.4.tgz", + "integrity": "sha512-5SMO8RVennx3nZrqtKwCGyyetPE9VDba5ugvKLaD4KopPG5kR4mQ7tNt/r7feL5yt5h3lpuBbIUmCOG2eSzXHA==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/js-tokens": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/js-tokens/-/js-tokens-4.0.0.tgz", + "integrity": "sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==" + }, + "node_modules/jsesc": { + "version": "2.5.2", + "resolved": "https://registry.npmjs.org/jsesc/-/jsesc-2.5.2.tgz", + "integrity": "sha512-OYu7XEzjkCQ3C5Ps3QIZsQfNpqoJyZZA99wd9aWd05NCtC5pWOkShK2mkL6HXQR6/Cy2lbNdPlZBpuQHXE63gA==", + "bin": { + "jsesc": "bin/jsesc" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/json5": { + "version": "2.2.3", + "resolved": "https://registry.npmjs.org/json5/-/json5-2.2.3.tgz", + "integrity": "sha512-XmOWe7eyHYH14cLdVPoyg+GOH3rYX++KpzrylJwSW98t3Nk+U8XOl8FWKOgwtzdb8lXGf6zYwDUzeHMWfxasyg==", + "bin": { + "json5": "lib/cli.js" + }, + "engines": { + "node": ">=6" + } + }, + "node_modules/lodash": { + "version": "4.17.21", + "resolved": "https://registry.npmjs.org/lodash/-/lodash-4.17.21.tgz", + "integrity": "sha512-v2kDEe57lecTulaDIuNTPy3Ry4gLGJ6Z1O3vE1krgXZNrsQ+LFTGHVxVjcXPs17LhbZVGedAJv8XZ1tvj5FvSg==" + }, + "node_modules/longest-streak": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/longest-streak/-/longest-streak-2.0.4.tgz", + "integrity": "sha512-vM6rUVCVUJJt33bnmHiZEvr7wPT78ztX7rojL+LW51bHtLh6HTjx84LA5W4+oa6aKEJA7jJu5LR6vQRBpA5DVg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/markdown-escapes": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/markdown-escapes/-/markdown-escapes-1.0.4.tgz", + "integrity": "sha512-8z4efJYk43E0upd0NbVXwgSTQs6cT3T06etieCMEg7dRbzCbxUCK/GHlX8mhHRDcp+OLlHkPKsvqQTCvsRl2cg==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/markdown-table": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/markdown-table/-/markdown-table-2.0.0.tgz", + "integrity": 
"sha512-Ezda85ToJUBhM6WGaG6veasyym+Tbs3cMAw/ZhOPqXiYsr0jgocBV3j3nx+4lk47plLlIqjwuTm/ywVI+zjJ/A==", + "dependencies": { + "repeat-string": "^1.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/mdast-util-compact": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/mdast-util-compact/-/mdast-util-compact-2.0.1.tgz", + "integrity": "sha512-7GlnT24gEwDrdAwEHrU4Vv5lLWrEer4KOkAiKT9nYstsTad7Oc1TwqT2zIMKRdZF7cTuaf+GA1E4Kv7jJh8mPA==", + "dependencies": { + "unist-util-visit": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-definitions": { + "version": "4.0.0", + "resolved": "https://registry.npmjs.org/mdast-util-definitions/-/mdast-util-definitions-4.0.0.tgz", + "integrity": "sha512-k8AJ6aNnUkB7IE+5azR9h81O5EQ/cTDXtWdMq9Kk5KcEW/8ritU5CeLg/9HhOC++nALHBlaogJ5jz0Ybk3kPMQ==", + "dependencies": { + "unist-util-visit": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-to-hast": { + "version": "10.2.0", + "resolved": "https://registry.npmjs.org/mdast-util-to-hast/-/mdast-util-to-hast-10.2.0.tgz", + "integrity": "sha512-JoPBfJ3gBnHZ18icCwHR50orC9kNH81tiR1gs01D8Q5YpV6adHNO9nKNuFBCJQ941/32PT1a63UF/DitmS3amQ==", + "dependencies": { + "@types/mdast": "^3.0.0", + "@types/unist": "^2.0.0", + "mdast-util-definitions": "^4.0.0", + "mdurl": "^1.0.0", + "unist-builder": "^2.0.0", + "unist-util-generated": "^1.0.0", + "unist-util-position": "^3.0.0", + "unist-util-visit": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdast-util-to-string": { + "version": "1.1.0", + "resolved": "https://registry.npmjs.org/mdast-util-to-string/-/mdast-util-to-string-1.1.0.tgz", + "integrity": "sha512-jVU0Nr2B9X3MU4tSK7JP1CMkSvOj7X5l/GboG1tKRw52lLF1x2Ju92Ms9tNetCcbfX3hzlM73zYo2NKkWSfF/A==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/mdurl": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/mdurl/-/mdurl-1.0.1.tgz", + "integrity": "sha512-/sKlQJCBYVY9Ers9hqzKou4H6V5UWc/M59TH2dvkt+84itfnq7uFOMLpOiOS4ujvHP4etln18fmIxA5R5fll0g==" + }, + "node_modules/merge2": { + "version": "1.4.1", + "resolved": "https://registry.npmjs.org/merge2/-/merge2-1.4.1.tgz", + "integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==", + "engines": { + "node": ">= 8" + } + }, + "node_modules/micromatch": { + "version": "4.0.5", + "resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.5.tgz", + "integrity": "sha512-DMy+ERcEW2q8Z2Po+WNXuw3c5YaUSFjAO5GsJqfEl7UjvtIuFKO6ZrKvcItdy98dwFI2N1tg3zNIdKaQT+aNdA==", + "dependencies": { + "braces": "^3.0.2", + "picomatch": "^2.3.1" + }, + "engines": { + "node": ">=8.6" + } + }, + "node_modules/ms": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/ms/-/ms-2.1.2.tgz", + "integrity": "sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==" + }, + "node_modules/parse-entities": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/parse-entities/-/parse-entities-2.0.0.tgz", + "integrity": "sha512-kkywGpCcRYhqQIchaWqZ875wzpS/bMKhz5HnN3p7wveJTkTtyAB/AlnS0f8DFSqYW1T82t6yEAkEcB+A1I3MbQ==", + "dependencies": { + "character-entities": "^1.0.0", + "character-entities-legacy": 
"^1.0.0", + "character-reference-invalid": "^1.0.0", + "is-alphanumerical": "^1.0.0", + "is-decimal": "^1.0.0", + "is-hexadecimal": "^1.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/parse5": { + "version": "6.0.1", + "resolved": "https://registry.npmjs.org/parse5/-/parse5-6.0.1.tgz", + "integrity": "sha512-Ofn/CTFzRGTTxwpNEs9PP93gXShHcTq255nzRYSKe8AkVpZY7e1fpmTfOyoIvjP5HG7Z2ZM7VS9PPhQGW2pOpw==" + }, + "node_modules/path-parse": { + "version": "1.0.7", + "resolved": "https://registry.npmjs.org/path-parse/-/path-parse-1.0.7.tgz", + "integrity": "sha512-LDJzPVEEEPR+y48z93A0Ed0yXb8pAByGWo/k5YYdYgpY2/2EsOsksJrq7lOHxryrVOn1ejG6oAp8ahvOIQD8sw==" + }, + "node_modules/picocolors": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.0.1.tgz", + "integrity": "sha512-anP1Z8qwhkbmu7MFP5iTt+wQKXgwzf7zTyGlcdzabySa9vd0Xt392U0rVmz9poOaBj0uHJKyyo9/upk0HrEQew==" + }, + "node_modules/picomatch": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/picomatch/-/picomatch-2.3.1.tgz", + "integrity": "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA==", + "engines": { + "node": ">=8.6" + }, + "funding": { + "url": "https://github.com/sponsors/jonschlinkert" + } + }, + "node_modules/property-information": { + "version": "5.6.0", + "resolved": "https://registry.npmjs.org/property-information/-/property-information-5.6.0.tgz", + "integrity": "sha512-YUHSPk+A30YPv+0Qf8i9Mbfe/C0hdPXk1s1jPVToV8pk8BQtpw10ct89Eo7OWkutrwqvT0eicAxlOg3dOAu8JA==", + "dependencies": { + "xtend": "^4.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/queue-microtask": { + "version": "1.2.3", + "resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz", + "integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ] + }, + "node_modules/rehype-parse": { + "version": "7.0.1", + "resolved": "https://registry.npmjs.org/rehype-parse/-/rehype-parse-7.0.1.tgz", + "integrity": "sha512-fOiR9a9xH+Le19i4fGzIEowAbwG7idy2Jzs4mOrFWBSJ0sNUgy0ev871dwWnbOo371SjgjG4pwzrbgSVrKxecw==", + "dependencies": { + "hast-util-from-parse5": "^6.0.0", + "parse5": "^6.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/rehype-stringify": { + "version": "8.0.0", + "resolved": "https://registry.npmjs.org/rehype-stringify/-/rehype-stringify-8.0.0.tgz", + "integrity": "sha512-VkIs18G0pj2xklyllrPSvdShAV36Ff3yE5PUO9u36f6+2qJFnn22Z5gKwBOwgXviux4UC7K+/j13AnZfPICi/g==", + "dependencies": { + "hast-util-to-html": "^7.1.1" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-admonitions": { + "version": "1.4.2", + "resolved": "git+ssh://git@github.com/josh-heyer/remark-admonitions.git#c57f82f3c5f21eeaf3c045ca111d294f13caab71", + "dependencies": { + "rehype-parse": "^6.0.2 || ^7.0.1", + "unified": "^9.2.2", + "unist-util-visit": "^2.0.3" + } + }, + "node_modules/remark-frontmatter": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/remark-frontmatter/-/remark-frontmatter-2.0.0.tgz", + 
"integrity": "sha512-uNOQt4tO14qBFWXenF0MLC4cqo3dv8qiHPGyjCl1rwOT0LomSHpcElbjjVh5CwzElInB38HD8aSRVugKQjeyHA==", + "dependencies": { + "fault": "^1.0.1" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-mdx": { + "version": "1.6.22", + "resolved": "https://registry.npmjs.org/remark-mdx/-/remark-mdx-1.6.22.tgz", + "integrity": "sha512-phMHBJgeV76uyFkH4rvzCftLfKCr2RZuF+/gmVcaKrpsihyzmhXjA0BEMDaPTXG5y8qZOKPVo83NAOX01LPnOQ==", + "dependencies": { + "@babel/core": "7.12.9", + "@babel/helper-plugin-utils": "7.10.4", + "@babel/plugin-proposal-object-rest-spread": "7.12.1", + "@babel/plugin-syntax-jsx": "7.12.1", + "@mdx-js/util": "1.6.22", + "is-alphabetical": "1.0.4", + "remark-parse": "8.0.3", + "unified": "9.2.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-mdx/node_modules/unified": { + "version": "9.2.0", + "resolved": "https://registry.npmjs.org/unified/-/unified-9.2.0.tgz", + "integrity": "sha512-vx2Z0vY+a3YoTj8+pttM3tiJHCwY5UFbYdiWrwBEbHmK8pvsPj2rtAX2BFfgXen8T39CJWblWRDT4L5WGXtDdg==", + "dependencies": { + "bail": "^1.0.0", + "extend": "^3.0.0", + "is-buffer": "^2.0.0", + "is-plain-obj": "^2.0.0", + "trough": "^1.0.0", + "vfile": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-parse": { + "version": "8.0.3", + "resolved": "https://registry.npmjs.org/remark-parse/-/remark-parse-8.0.3.tgz", + "integrity": "sha512-E1K9+QLGgggHxCQtLt++uXltxEprmWzNfg+MxpfHsZlrddKzZ/hZyWHDbK3/Ap8HJQqYJRXP+jHczdL6q6i85Q==", + "dependencies": { + "ccount": "^1.0.0", + "collapse-white-space": "^1.0.2", + "is-alphabetical": "^1.0.0", + "is-decimal": "^1.0.0", + "is-whitespace-character": "^1.0.0", + "is-word-character": "^1.0.0", + "markdown-escapes": "^1.0.0", + "parse-entities": "^2.0.0", + "repeat-string": "^1.5.4", + "state-toggle": "^1.0.0", + "trim": "0.0.1", + "trim-trailing-lines": "^1.0.0", + "unherit": "^1.0.4", + "unist-util-remove-position": "^2.0.0", + "vfile-location": "^3.0.0", + "xtend": "^4.0.1" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-rehype": { + "version": "8.1.0", + "resolved": "https://registry.npmjs.org/remark-rehype/-/remark-rehype-8.1.0.tgz", + "integrity": "sha512-EbCu9kHgAxKmW1yEYjx3QafMyGY3q8noUbNUI5xyKbaFP89wbhDrKxyIQNukNYthzjNHZu6J7hwFg7hRm1svYA==", + "dependencies": { + "mdast-util-to-hast": "^10.2.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/remark-stringify": { + "version": "8.1.1", + "resolved": "https://registry.npmjs.org/remark-stringify/-/remark-stringify-8.1.1.tgz", + "integrity": "sha512-q4EyPZT3PcA3Eq7vPpT6bIdokXzFGp9i85igjmhRyXWmPs0Y6/d2FYwUNotKAWyLch7g0ASZJn/KHHcHZQ163A==", + "dependencies": { + "ccount": "^1.0.0", + "is-alphanumeric": "^1.0.0", + "is-decimal": "^1.0.0", + "is-whitespace-character": "^1.0.0", + "longest-streak": "^2.0.1", + "markdown-escapes": "^1.0.0", + "markdown-table": "^2.0.0", + "mdast-util-compact": "^2.0.0", + "parse-entities": "^2.0.0", + "repeat-string": "^1.5.4", + "state-toggle": "^1.0.0", + "stringify-entities": "^3.0.0", + "unherit": "^1.0.4", + "xtend": "^4.0.1" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/repeat-string": { + "version": "1.6.1", + "resolved": 
"https://registry.npmjs.org/repeat-string/-/repeat-string-1.6.1.tgz", + "integrity": "sha512-PV0dzCYDNfRi1jCDbJzpW7jNNDRuCOG/jI5ctQcGKt/clZD+YcPS3yIlWuTJMmESC8aevCFmWJy5wjAFgNqN6w==", + "engines": { + "node": ">=0.10" + } + }, + "node_modules/resolve": { + "version": "1.22.8", + "resolved": "https://registry.npmjs.org/resolve/-/resolve-1.22.8.tgz", + "integrity": "sha512-oKWePCxqpd6FlLvGV1VU0x7bkPmmCNolxzjMf4NczoDnQcIWrAF+cPtZn5i6n+RfD2d9i0tzpKnG6Yk168yIyw==", + "dependencies": { + "is-core-module": "^2.13.0", + "path-parse": "^1.0.7", + "supports-preserve-symlinks-flag": "^1.0.0" + }, + "bin": { + "resolve": "bin/resolve" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/reusify": { + "version": "1.0.4", + "resolved": "https://registry.npmjs.org/reusify/-/reusify-1.0.4.tgz", + "integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==", + "engines": { + "iojs": ">=1.0.0", + "node": ">=0.10.0" + } + }, + "node_modules/run-parallel": { + "version": "1.2.0", + "resolved": "https://registry.npmjs.org/run-parallel/-/run-parallel-1.2.0.tgz", + "integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==", + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/feross" + }, + { + "type": "patreon", + "url": "https://www.patreon.com/feross" + }, + { + "type": "consulting", + "url": "https://feross.org/support" + } + ], + "dependencies": { + "queue-microtask": "^1.2.2" + } + }, + "node_modules/semver": { + "version": "5.7.2", + "resolved": "https://registry.npmjs.org/semver/-/semver-5.7.2.tgz", + "integrity": "sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==", + "bin": { + "semver": "bin/semver" + } + }, + "node_modules/source-map": { + "version": "0.5.7", + "resolved": "https://registry.npmjs.org/source-map/-/source-map-0.5.7.tgz", + "integrity": "sha512-LbrmJOMUSdEVxIKvdcJzQC+nQhe8FUZQTXQy6+I75skNgn3OoQ0DZA8YnFa7gp8tqtL3KPf1kmo0R5DoApeSGQ==", + "engines": { + "node": ">=0.10.0" + } + }, + "node_modules/space-separated-tokens": { + "version": "1.1.5", + "resolved": "https://registry.npmjs.org/space-separated-tokens/-/space-separated-tokens-1.1.5.tgz", + "integrity": "sha512-q/JSVd1Lptzhf5bkYm4ob4iWPjx0KiRe3sRFBNrVqbJkFaBm5vbbowy1mymoPNLRa52+oadOhJ+K49wsSeSjTA==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/state-toggle": { + "version": "1.0.3", + "resolved": "https://registry.npmjs.org/state-toggle/-/state-toggle-1.0.3.tgz", + "integrity": "sha512-d/5Z4/2iiCnHw6Xzghyhb+GcmF89bxwgXG60wjIiZaxnymbyOmI8Hk4VqHXiVVp6u2ysaskFfXg3ekCj4WNftQ==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/stringify-entities": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/stringify-entities/-/stringify-entities-3.1.0.tgz", + "integrity": "sha512-3FP+jGMmMV/ffZs86MoghGqAoqXAdxLrJP4GUdrDN1aIScYih5tuIO3eF4To5AJZ79KDZ8Fpdy7QJnK8SsL1Vg==", + "dependencies": { + "character-entities-html4": "^1.0.0", + "character-entities-legacy": "^1.0.0", + "xtend": "^4.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/supports-color": { + "version": "5.5.0", + "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-5.5.0.tgz", + "integrity": 
"sha512-QjVjwdXIt408MIiAqCX4oUKsgU2EqAGzs2Ppkm4aQYbjm+ZEWEcW4SfFNTr4uMNZma0ey4f5lgLrkB0aX0QMow==", + "dependencies": { + "has-flag": "^3.0.0" + }, + "engines": { + "node": ">=4" + } + }, + "node_modules/supports-preserve-symlinks-flag": { + "version": "1.0.0", + "resolved": "https://registry.npmjs.org/supports-preserve-symlinks-flag/-/supports-preserve-symlinks-flag-1.0.0.tgz", + "integrity": "sha512-ot0WnXS9fgdkgIcePe6RHNk1WA8+muPa6cSjeR3V8K27q9BB1rTE3R1p7Hv0z1ZyAc8s6Vvv8DIyWf681MAt0w==", + "engines": { + "node": ">= 0.4" + }, + "funding": { + "url": "https://github.com/sponsors/ljharb" + } + }, + "node_modules/to-fast-properties": { + "version": "2.0.0", + "resolved": "https://registry.npmjs.org/to-fast-properties/-/to-fast-properties-2.0.0.tgz", + "integrity": "sha512-/OaKK0xYrs3DmxRYqL/yDc+FxFUVYhDlXMhRmv3z915w2HF1tnN1omB354j8VUGO/hbRzyD6Y3sA7v7GS/ceog==", + "engines": { + "node": ">=4" + } + }, + "node_modules/to-regex-range": { + "version": "5.0.1", + "resolved": "https://registry.npmjs.org/to-regex-range/-/to-regex-range-5.0.1.tgz", + "integrity": "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ==", + "dependencies": { + "is-number": "^7.0.0" + }, + "engines": { + "node": ">=8.0" + } + }, + "node_modules/to-vfile": { + "version": "6.1.0", + "resolved": "https://registry.npmjs.org/to-vfile/-/to-vfile-6.1.0.tgz", + "integrity": "sha512-BxX8EkCxOAZe+D/ToHdDsJcVI4HqQfmw0tCkp31zf3dNP/XWIAjU4CmeuSwsSoOzOTqHPOL0KUzyZqJplkD0Qw==", + "dependencies": { + "is-buffer": "^2.0.0", + "vfile": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/trim": { + "version": "1.0.1", + "resolved": "https://registry.npmjs.org/trim/-/trim-1.0.1.tgz", + "integrity": "sha512-3JVP2YVqITUisXblCDq/Bi4P9457G/sdEamInkyvCsjbTcXLXIiG7XCb4kGMFWh6JGXesS3TKxOPtrncN/xe8w==", + "deprecated": "Use String.prototype.trim() instead" + }, + "node_modules/trim-trailing-lines": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/trim-trailing-lines/-/trim-trailing-lines-1.1.4.tgz", + "integrity": "sha512-rjUWSqnfTNrjbB9NQWfPMH/xRK1deHeGsHoVfpxJ++XeYXE0d6B1En37AHfw3jtfTU7dzMzZL2jjpe8Qb5gLIQ==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/trough": { + "version": "1.0.5", + "resolved": "https://registry.npmjs.org/trough/-/trough-1.0.5.tgz", + "integrity": "sha512-rvuRbTarPXmMb79SmzEp8aqXNKcK+y0XaB298IXueQ8I2PsrATcPBCSPyK/dDNa2iWOhKlfNnOjdAOTBU/nkFA==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/unherit": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/unherit/-/unherit-1.1.3.tgz", + "integrity": "sha512-Ft16BJcnapDKp0+J/rqFC3Rrk6Y/Ng4nzsC028k2jdDII/rdZ7Wd3pPT/6+vIIxRagwRc9K0IUX0Ra4fKvw+WQ==", + "dependencies": { + "inherits": "^2.0.0", + "xtend": "^4.0.0" + }, + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/unified": { + "version": "9.2.2", + "resolved": "https://registry.npmjs.org/unified/-/unified-9.2.2.tgz", + "integrity": "sha512-Sg7j110mtefBD+qunSLO1lqOEKdrwBFBrR6Qd8f4uwkhWNlbkaqwHse6e7QvD3AP/MNoJdEDLaf8OxYyoWgorQ==", + "dependencies": { + "bail": "^1.0.0", + "extend": "^3.0.0", + "is-buffer": "^2.0.0", + "is-plain-obj": "^2.0.0", + "trough": "^1.0.0", + "vfile": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + 
"node_modules/unist-builder": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/unist-builder/-/unist-builder-2.0.3.tgz", + "integrity": "sha512-f98yt5pnlMWlzP539tPc4grGMsFaQQlP/vM396b00jngsiINumNmsY8rkXjfoi1c6QaM8nQ3vaGDuoKWbe/1Uw==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-generated": { + "version": "1.1.6", + "resolved": "https://registry.npmjs.org/unist-util-generated/-/unist-util-generated-1.1.6.tgz", + "integrity": "sha512-cln2Mm1/CZzN5ttGK7vkoGw+RZ8VcUH6BtGbq98DDtRGquAAOXig1mrBQYelOwMXYS8rK+vZDyyojSjp7JX+Lg==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-is": { + "version": "4.1.0", + "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-4.1.0.tgz", + "integrity": "sha512-ZOQSsnce92GrxSqlnEEseX0gi7GH9zTJZ0p9dtu87WRb/37mMPO2Ilx1s/t9vBHrFhbgweUwb+t7cIn5dxPhZg==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-position": { + "version": "3.1.0", + "resolved": "https://registry.npmjs.org/unist-util-position/-/unist-util-position-3.1.0.tgz", + "integrity": "sha512-w+PkwCbYSFw8vpgWD0v7zRCl1FpY3fjDSQ3/N/wNd9Ffa4gPi8+4keqt99N3XW6F99t/mUzp2xAhNmfKWp95QA==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-remove-position": { + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/unist-util-remove-position/-/unist-util-remove-position-2.0.1.tgz", + "integrity": "sha512-fDZsLYIe2uT+oGFnuZmy73K6ZxOPG/Qcm+w7jbEjaFcJgbQ6cqjs/eSPzXhsmGpAsWPkqZM9pYjww5QTn3LHMA==", + "dependencies": { + "unist-util-visit": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-stringify-position": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/unist-util-stringify-position/-/unist-util-stringify-position-2.0.3.tgz", + "integrity": "sha512-3faScn5I+hy9VleOq/qNbAd6pAx7iH5jYBMS9I1HgQVijz/4mv5Bvw5iw1sC/90CODiKo81G/ps8AJrISn687g==", + "dependencies": { + "@types/unist": "^2.0.2" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit": { + "version": "2.0.3", + "resolved": "https://registry.npmjs.org/unist-util-visit/-/unist-util-visit-2.0.3.tgz", + "integrity": "sha512-iJ4/RczbJMkD0712mGktuGpm/U4By4FfDonL7N/9tATGIF4imikjOuagyMY53tnZq3NP6BcmlrHhEKAfGWjh7Q==", + "dependencies": { + "@types/unist": "^2.0.0", + "unist-util-is": "^4.0.0", + "unist-util-visit-parents": "^3.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit-parents": { + "version": "5.1.3", + "resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-5.1.3.tgz", + "integrity": "sha512-x6+y8g7wWMyQhL1iZfhIPhDAs7Xwbn9nRosDXl7qoPTSCy0yNxnKc+hWokFifWQIDGi154rdUqKvbCa4+1kLhg==", + "dependencies": { + "@types/unist": "^2.0.0", + "unist-util-is": "^5.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit-parents/node_modules/unist-util-is": { + "version": "5.2.1", + "resolved": "https://registry.npmjs.org/unist-util-is/-/unist-util-is-5.2.1.tgz", + "integrity": 
"sha512-u9njyyfEh43npf1M+yGKDGVPbY/JWEemg5nH05ncKPfi+kBbKBJoTdsogMu33uhytuLlv9y0O7GH7fEdwLdLQw==", + "dependencies": { + "@types/unist": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/unist-util-visit/node_modules/unist-util-visit-parents": { + "version": "3.1.1", + "resolved": "https://registry.npmjs.org/unist-util-visit-parents/-/unist-util-visit-parents-3.1.1.tgz", + "integrity": "sha512-1KROIZWo6bcMrZEwiH2UrXDyalAa0uqzWCxCJj6lPOvTve2WkfgCytoDTPaMnodXh1WrXOq0haVYHj99ynJlsg==", + "dependencies": { + "@types/unist": "^2.0.0", + "unist-util-is": "^4.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/vfile": { + "version": "4.2.1", + "resolved": "https://registry.npmjs.org/vfile/-/vfile-4.2.1.tgz", + "integrity": "sha512-O6AE4OskCG5S1emQ/4gl8zK586RqA3srz3nfK/Viy0UPToBc5Trp9BVFb1u0CjsKrAWwnpr4ifM/KBXPWwJbCA==", + "dependencies": { + "@types/unist": "^2.0.0", + "is-buffer": "^2.0.0", + "unist-util-stringify-position": "^2.0.0", + "vfile-message": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/vfile-location": { + "version": "3.2.0", + "resolved": "https://registry.npmjs.org/vfile-location/-/vfile-location-3.2.0.tgz", + "integrity": "sha512-aLEIZKv/oxuCDZ8lkJGhuhztf/BW4M+iHdCwglA/eWc+vtuRFJj8EtgceYFX4LRjOhCAAiNHsKGssC6onJ+jbA==", + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/vfile-message": { + "version": "2.0.4", + "resolved": "https://registry.npmjs.org/vfile-message/-/vfile-message-2.0.4.tgz", + "integrity": "sha512-DjssxRGkMvifUOJre00juHoP9DPWuzjxKuMDrhNbk2TdaYYBNMStsNhEOt3idrtI12VQYM/1+iM0KOzXi4pxwQ==", + "dependencies": { + "@types/unist": "^2.0.0", + "unist-util-stringify-position": "^2.0.0" + }, + "funding": { + "type": "opencollective", + "url": "https://opencollective.com/unified" + } + }, + "node_modules/web-namespaces": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/web-namespaces/-/web-namespaces-1.1.4.tgz", + "integrity": "sha512-wYxSGajtmoP4WxfejAPIr4l0fVh+jeMXZb08wNc0tMg6xsfZXj3cECqIK0G7ZAqUq0PP8WlMDtaOGVBTAWztNw==", + "funding": { + "type": "github", + "url": "https://github.com/sponsors/wooorm" + } + }, + "node_modules/xtend": { + "version": "4.0.2", + "resolved": "https://registry.npmjs.org/xtend/-/xtend-4.0.2.tgz", + "integrity": "sha512-LKYU1iAXJXUgAXn9URjiu+MWhyUXHsvfp7mcuYm9dSUKK0/CjtrUwFAxD82/mCWbtLsGjFIad0wIsod4zrTAEQ==", + "engines": { + "node": ">=0.4" + } + }, + "node_modules/yaml": { + "version": "2.3.1", + "resolved": "https://registry.npmjs.org/yaml/-/yaml-2.3.1.tgz", + "integrity": "sha512-2eHWfjaoXgTBC2jNM1LRef62VQa0umtvRiDSk6HSzW7RvS5YtkabJrwYLLEKWBc8a5U2PTSCs+dJjUTJdlHsWQ==", + "engines": { + "node": ">= 14" + } + } + } +} diff --git a/tools/user/reorg/links/package.json b/tools/user/reorg/links/package.json new file mode 100644 index 00000000000..3f039b07b90 --- /dev/null +++ b/tools/user/reorg/links/package.json @@ -0,0 +1,35 @@ +{ + "name": "links", + "version": "1.0.0", + "description": "Update links to moved files based on git history for use during content reorganization", + "main": "update-links-to-renames.js", + "type": "module", + "scripts": { + "test": "echo \"Error: no test specified\" && exit 1" + }, + "author": "josh.heyer@enterprisedb.com", + "license": "UNLICENSED", + "dependencies": { + "fast-glob": "^3.2.12", + "yaml": 
"^2.3.1", + "github-slugger": "^1.5.0", + "hast-util-to-html": "^7.1.3", + "html-void-elements": "^2.0.1", + "is-absolute-url": "^3.0.3", + "mdast-util-to-string": "^1.1.0", + "rehype-parse": "^7.0.1", + "rehype-stringify": "^8.0.0", + "remark-admonitions": "github:josh-heyer/remark-admonitions", + "remark-frontmatter": "^2.0.0", + "remark-mdx": "^1.6.22", + "remark-rehype": "^8.0.0", + "remark-stringify": "^8.1.1", + "to-vfile": "^6.1.0", + "unified": "^9.2.2", + "unist-util-visit": "^2.0.3", + "unist-util-visit-parents": "^5.1.3" + }, + "overrides": { + "trim": ">=0.0.3" + } +} diff --git a/tools/user/reorg/links/update-links-to-renames.js b/tools/user/reorg/links/update-links-to-renames.js new file mode 100644 index 00000000000..e45ca45dfa9 --- /dev/null +++ b/tools/user/reorg/links/update-links-to-renames.js @@ -0,0 +1,298 @@ +// Updates links based on git renames detected between this and a base branch +// Takes one optional parameter: the name of the base branch (default: develop) +// Aims to be idempotent for a given branch: running twice will not change anything if the branch itself hasn't changed + +import path from "path"; +import remarkParse from "remark-parse"; +import mdx from "remark-mdx"; +import unified from "unified"; +import remarkFrontmatter from "remark-frontmatter"; +import remarkStringify from "remark-stringify"; +import admonitions from "remark-admonitions"; +import { fileURLToPath } from "url"; +import { exec, execSync } from "child_process"; +import glob from "fast-glob"; +import { visitParents } from "unist-util-visit-parents"; +import remarkMdxEmbeddedHast from "./lib/mdast-embedded-hast.mjs"; +import toVfile from "to-vfile"; +const { read, write } = toVfile; + +const args = process.argv.slice(2); +const __dirname = path.dirname(fileURLToPath(import.meta.url)); +const basePath = path.resolve(__dirname, "../../../.."); +const baseBranch = args[0] || "origin/develop"; + +// add path here to ignore link warnings +const noWarnPaths = ["/playground/1/01_examples/link-tests"]; + +// first bit of this script is synchronous - there's absolutely nothing we can do until git gives us the list of renames, +// and I don't feel like dealing with the crufty old exec() interface +// Note that it's entirely possible for the list of renames to have the same +// files listed multiple times, forming a chain - that gets handled later +// For when I inevitably forget why I did it this way: +// - https://git-scm.com/docs/gitrevisions +// - https://git-scm.com/docs/git-log +let branch = ""; +let renames = []; +try { + branch = execSync("git rev-parse --abbrev-ref HEAD", { cwd: basePath }) + .toString() + .trim(); + renames = execSync( + `git log --diff-filter=R --find-renames=4 --name-status --pretty=format: ${baseBranch}..${branch} + git diff --diff-filter=R --find-renames=4 --name-status --pretty=format: ${baseBranch}..${branch}`, + { cwd: basePath }, + ) + .toString() + .split("\n") + .filter((l) => l.startsWith("R") && l.endsWith(".mdx")) + .map((l) => l.split("\t").slice(1)); +} catch (e) { + throw new Error("Error running git: " + e.toString()); +} + +console.log( + `Found ${renames.length} renames between ${baseBranch} and ${branch}`, +); + +// if this build was triggered by a GH action in response to a PR, +// use the head ref (the branch that someone is requesting be merged) +// if this process was otherwise triggered by a GH action, use the current branch name +const ghBranch = process.env.GITHUB_HEAD_REF || process.env.GITHUB_REF; + +const formatErrorPath = (path, line, 
column) => { + return ghBranch + ? `https://github.com/EnterpriseDB/docs/blob/${branch}/${path}?plain=1#L${line}` + : `${path}:${line}:${column}`; +}; + +// this does everything else: +// - creates a map of previous URL paths to current URL paths based on the git-identified renames +// - reads each mdx file in turn +// - finds links that point to a previous URL path +// - updates these links to point to the current path +// - writes out the file (if it has changed) +const run = async () => { + const mapRenames = []; + const mapOldPathToNew = new Map(); + const mapNewPathToOld = new Map(); + for (const [before, after] of renames) { + // handle chains of renames such that when: + // c->d + // b->c + // a->b + // mapOldPathToNew[a] = d + // mapOldPathToNew[b] = d + // mapOldPathToNew[c] = d + // mapNewPathToOld[d] = [d,c,b,a] + mapRenames.push({ before, after }); + let current = after; + for ( + let renameIndex = mapRenames.length - 1; + renameIndex >= 0; + renameIndex = mapRenames.findLastIndex( + (r, i) => r.before === current && i < renameIndex, + ) + ) { + current = mapRenames[renameIndex].after; + } + + const oldUrlPath = fsPathToURLPath(before); + const newUrlPath = fsPathToURLPath(current); + mapOldPathToNew.set(oldUrlPath, { + path: newUrlPath, + index: isIndex(current), + }); + if (!mapNewPathToOld.has(newUrlPath)) + mapNewPathToOld.set(newUrlPath, [ + { path: newUrlPath, index: isIndex(current) }, + ]); + mapNewPathToOld + .get(newUrlPath) + .push({ path: oldUrlPath, index: isIndex(before) }); + if (before.startsWith("product_docs")) { + const oldUnversionedUrlPath = latestVersionURLPath(before); + const newUnversionedUrlPath = latestVersionURLPath(current); + mapOldPathToNew.set(oldUnversionedUrlPath, { + path: newUnversionedUrlPath, + index: isIndex(current), + }); + } + } + + const sourceFiles = await glob([ + path.resolve(basePath, "product_docs/**/*.mdx"), + path.resolve(basePath, "advocacy_docs/**/*.mdx"), + ]); + + const processor = unified() + .use(remarkParse) + .use(remarkStringify, { emphasis: "*", bullet: "-", fences: true }) + .use(remarkMdxEmbeddedHast) + .use(admonitions, { + tag: "!!!", + icons: "none", + infima: true, + customTypes: { + seealso: "note", + hint: "tip", + interactive: "interactive", + }, + }) + .use(mdx) + .use(remarkFrontmatter) + .use(cleanup); + + console.log(`Scanning ${sourceFiles.length} pages for matching links`); + + const allValidUrlPaths = new Set( + sourceFiles.flatMap((p) => [fsPathToURLPath(p), latestVersionURLPath(p)]), + ); + + let found = 0, + failed = 0, + updated = 0; + for (const sourcePath of sourceFiles) { + const lastFound = found; + const input = await read(sourcePath); + const result = await processor.process(input); // should normally return input + if (lastFound !== found) { + await write(result); + ++updated; + } + } + + console.log( + `${mapOldPathToNew.size} path mappings identified, ${found} links updated, ${failed} links to reorganized content with no identifiable new path. +${updated} files updated`, + ); + + function isIndex(fsPath) { + return /\/index\.mdx?$/.test(fsPath); + } + + function fsPathToURLPath(fsPath) { + // 1. strip leading product_docs/docs and advocacy_docs + // 2. strip trailing index.mdx + // 3. strip trailing .mdx + // 4. 
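strip trailing / + // Illustrative sketch (assumed, hypothetical paths — not taken from the repo): "product_docs/docs/pgx/4/foo/index.mdx" -> "/pgx/4/foo", + // "advocacy_docs/community/bar.mdx" -> "/community/bar"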
+ const docsLocations = /product_docs\/docs|advocacy_docs/; + return fsPath + .split(docsLocations)[1] + .replace(/\/index\.mdx$|\.mdx$/, "") + .replace(/\/$/, ""); + } + + function latestVersionURLPath(fsPath) { + const urlPath = fsPathToURLPath(fsPath); + if (!fsPath.includes("product_docs")) return urlPath; + const splitPath = urlPath.split("/"); + return path.posix.join("/", splitPath[1], "latest", ...splitPath.slice(3)); + } + + function cleanup() { + const docsUrl = "https://www.enterprisedb.com/docs"; + return (tree, file) => { + let currentPagePath = fsPathToURLPath(file.path); + let currentIsIndex = isIndex(file.path); + + const normalizeUrl = (url, pagePath, index) => { + let dest = new URL(url, "local:" + pagePath + (index ? "/" : "")); + if (dest.protocol === "local:" && dest.host === "") + dest = new URL( + docsUrl + + dest.pathname + .replace(/\/index\.mdx?$|\.mdx?$/, "") + .replace(/\/$/, "") + + dest.hash, + ); + return dest; + }; + + const relativize = ({ path: relative, index }) => { + // if path is identical to current: strip all but hash + if (relative === currentPagePath) return ""; + + const currentDirname = currentIsIndex + ? currentPagePath + : path.posix.dirname(currentPagePath); + // if dirname is identical to current: strip all but filename and hash + // if dirname contains current dirname: relative path + hash + if (path.posix.dirname(relative).startsWith(currentDirname)) + relative = path.posix.relative(currentDirname, relative); + // otherwise: full path + return relative; + }; + + const mapUrlToMovedFile = (url) => { + let test = normalizeUrl(url, currentPagePath, currentIsIndex); + if (!test.href.startsWith(docsUrl)) return url; + if (path.posix.extname(test.pathname)) return url; + if ( + allValidUrlPaths.has( + test.pathname.replace(/^\/docs/, "").replace(/\/$/, ""), + ) + ) + return url; + + const allPagePaths = mapNewPathToOld.get(currentPagePath) || [ + { path: currentPagePath, index: currentIsIndex }, + ]; + for (let { path: pagePath, index } of allPagePaths) { + const dest = normalizeUrl(url, pagePath, index); + let remapped = dest.pathname + .replace(/^\/docs/, "") + .replace(/\/$/, ""); + if (allValidUrlPaths.has(remapped)) remapped = { path: remapped }; + else if (mapOldPathToNew.has(remapped)) + remapped = mapOldPathToNew.get(remapped); + else continue; + if (!allValidUrlPaths.has(remapped.path)) { + console.error( + `Skip remap of old invalid path ${url} to new invalid path ${remapped.path}`, + ); + failed++; + continue; + } + ++found; + remapped = relativize(remapped); + return (remapped && remapped + "/") + dest.hash; + } + + // Intentionally disabled (note the `false &&`): invalid paths are left for the link checker to report + if (false && !noWarnPaths.includes(currentPagePath)) + throw { + message: `invalid URL path: ${url} (${test.pathname})`, + severity: 1, + }; + + return url; + }; + + visitParents(tree, ["link", "element"], (node) => { + try { + if ( + node.type === "element" && + node.tagName === "a" && + node.properties.href + ) + node.properties.href = mapUrlToMovedFile(node.properties.href); + else if (node.type === "link") node.url = mapUrlToMovedFile(node.url); + } catch (e) { + console.log( + `${e.severity === 1 ? 
"⚠️⚠️ " : "⚠️ "} ${formatErrorPath( + file.path, + node.position.start.line, + node.position.start.column, + )}\n`, + e.message, + ); + } + }); + }; + } +}; + +run(); diff --git a/tools/user/reorg/redirects/package-lock.json b/tools/user/reorg/redirects/package-lock.json index 8bf147e55b3..e153f297e51 100644 --- a/tools/user/reorg/redirects/package-lock.json +++ b/tools/user/reorg/redirects/package-lock.json @@ -46,11 +46,11 @@ } }, "node_modules/braces": { - "version": "3.0.2", - "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.2.tgz", - "integrity": "sha512-b8um+L1RzM3WDSzvhm6gIz1yfTbBt6YTlcEKAvsmqCZZFw46z626lVj9j1yEPW33H5H+lBQpZMP1k8l+78Ha0A==", + "version": "3.0.3", + "resolved": "https://registry.npmjs.org/braces/-/braces-3.0.3.tgz", + "integrity": "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA==", "dependencies": { - "fill-range": "^7.0.1" + "fill-range": "^7.1.1" }, "engines": { "node": ">=8" @@ -80,9 +80,9 @@ } }, "node_modules/fill-range": { - "version": "7.0.1", - "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.0.1.tgz", - "integrity": "sha512-qOo9F+dMUmC2Lcb4BbVvnKJxTPjCm+RRpe4gDuGrzkL7mEVl/djYSu2OdQ2Pa302N4oqkSg9ir6jaLWJ2USVpQ==", + "version": "7.1.1", + "resolved": "https://registry.npmjs.org/fill-range/-/fill-range-7.1.1.tgz", + "integrity": "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg==", "dependencies": { "to-regex-range": "^5.0.1" }, diff --git a/tools/user/reorg/redirects/package.json b/tools/user/reorg/redirects/package.json index 17db41110f3..c8d11486185 100644 --- a/tools/user/reorg/redirects/package.json +++ b/tools/user/reorg/redirects/package.json @@ -12,5 +12,8 @@ "dependencies": { "fast-glob": "^3.2.12", "yaml": "^2.3.1" + }, + "overrides": { + "trim": ">=0.0.3" } } From a4bcc4ee2c26132f3936f42e60f4068a7570a4bb Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Tue, 30 Jul 2024 03:28:32 +0000 Subject: [PATCH 11/15] automated link corrections --- .../community/contributing/styleguide.mdx | 4 +- .../release/known_issues/known_issues_pgd.mdx | 18 ++- .../release/migration/dha_bulk_migration.mdx | 114 ++++++++++-------- .../fault_injection_testing/index.mdx | 45 ++++--- .../release/using_cluster/pgd_cli_ba.mdx | 25 ++-- .../02_sql_tutorial/09_the_sql_language.mdx | 2 +- .../08_oracle_catalog_views.mdx | 2 +- .../epas_compat_ora_dev_guide/10_ecpgplus.mdx | 2 +- .../11_system_catalog_tables.mdx | 2 +- .../02_sql_tutorial/09_the_sql_language.mdx | 2 +- .../08_oracle_catalog_views.mdx | 2 +- .../epas_compat_ora_dev_guide/10_ecpgplus.mdx | 2 +- .../11_system_catalog_tables.mdx | 2 +- .../configuring_epas.mdx | 2 +- .../apache_httpd_security_configuration.mdx | 17 +-- .../pem_application_configuration.mdx | 11 +- .../docs/pem/8/monitoring_BDR_nodes.mdx | 3 +- product_docs/docs/pem/8/pem_architecture.mdx | 59 ++++----- .../01_pem_architecture.mdx | 9 +- .../07_pem_define_connection.mdx | 3 +- .../21_performance_diagnostic.mdx | 3 +- .../8/profiling_workloads/index_advisor.mdx | 6 +- .../performance_diagnostic.mdx | 42 ++++--- .../docs/pgd/4/harp/03_installation.mdx | 15 +-- .../pgd/4/rel_notes/pgd_4.0.0_rel_notes.mdx | 38 +++--- product_docs/docs/pge/15/deploy_options.mdx | 4 +- product_docs/docs/pge/16/deploy_options.mdx | 4 +- .../1/architecture.mdx | 21 ++-- .../1/identify_images/private_registries.mdx | 6 +- .../1/known_issues.mdx | 16 +-- .../postgres_for_kubernetes/1/logging.mdx | 8 +- 31 files changed, 262 insertions(+), 227 deletions(-) diff 
--git a/advocacy_docs/community/contributing/styleguide.mdx b/advocacy_docs/community/contributing/styleguide.mdx index ae6809abe16..c38351cc472 100644 --- a/advocacy_docs/community/contributing/styleguide.mdx +++ b/advocacy_docs/community/contributing/styleguide.mdx @@ -389,11 +389,11 @@ Information about managing authentication is also available in the [Postgres co If you're referring to a guide on Docs 2.0, the label is the name of the guide and in italics. For example: -For information about modifying the `pg_hba.conf` file, see the [_PEM Administrator's Guide_](https://www.enterprisedb.com/docs/pem/latest/pem_admin/). +For information about modifying the `pg_hba.conf` file, see the [_PEM Administrator's Guide_](/pem/latest/). Link capitalization can be either title or sentence case: -* **Use title case** and _italics_ when referring to the linked doc by name. For example. “For information about modifying the `pg_hba.conf` file, see the [_PEM Administrator's Guide_](https://www.enterprisedb.com/docs/pem/latest/pem_admin/).”). +* **Use title case** and _italics_ when referring to the linked doc by name. For example. “For information about modifying the `pg_hba.conf` file, see the [_PEM Administrator's Guide_](/pem/latest/).”). * **Use sentence case** when linking in the middle of a sentence. For example, “\[…\] follow the identifier rules when creating \[…\]“). diff --git a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx index d472725761b..f99711d6163 100644 --- a/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx +++ b/product_docs/docs/biganimal/release/known_issues/known_issues_pgd.mdx @@ -9,19 +9,22 @@ redirects: These are currently known issues in EDB Postgres Distributed (PGD) on BigAnimal as deployed in distributed high availability clusters. These known issues are tracked in our ticketing system and are expected to be resolved in a future release. -For general PGD known issues, refer to the [Known Issues](/pgd/latest/known_issues/) and [Limitations](/pgd/latest/limitations/) in the PGD documentation. +For general PGD known issues, refer to the [Known Issues](/pgd/latest/known_issues/) and [Limitations](/pgd/latest/planning/limitations/) in the PGD documentation. ## Management/administration -### Deleting a PGD data group may not fully reconcile +### Deleting a PGD data group may not fully reconcile + When deleting a PGD data group, the target group resources is physically deleted, but in some cases we have observed that the PGD nodes may not be completely partitioned from the remaining PGD Groups. We recommend avoiding use of this feature until this is fixed and removed from the known issues list. ### Adjusting PGD cluster architecture may not fully reconcile + In rare cases, we have observed that changing the node architecture of an existing PGD cluster may not complete. If a change hasn't taken effect in 1 hour, reach out to Support. ### PGD cluster may fail to create due to Azure SKU issue + In some cases, although a regional quota check may have passed initially when the PGD cluster is created, it may fail if an SKU critical for the witness nodes is unavailable across three availability zones. To check for this issue at the time of a region quota check, run: @@ -36,34 +39,41 @@ We're going to be provisioning a number of instances of in • A multiple (2 or 3) of your largest table
or
• More than one third of the capacity of your dedicated WAL disk (if configured) | - | GUC variable | Setting | - |----------------------|----------------------------------------------------------------------------------------------------------------------------------------------| - | maintenance_work_mem | 1GB | - | wal_sender_timeout | 60min | - | wal_receiver_timeout | 60min | - | max_wal_size | Set to either:
• A multiple (2 or 3) of your largest table
or
• More than one third of the capacity of your dedicated WAL disk (if configured) | - Make note of the target's proxy hostname (target-proxy) and port (target-port). You also need a user (target-user) and password (target-password) for the target cluster. The following instructions give examples for a cluster named `ab-cluster` with an `ab-group` subgroup and three nodes: `ab-node-1`, `ab-node-2`, and `ab-node3`. The cluster is accessed through a host named `ab-proxy` (the target-proxy). @@ -33,30 +32,28 @@ On BigAnimal, a cluster is configured, by default, with an `edb_admin` user (the The target-password for the target-user is available from the BigAnimal dashboard for the cluster. A database named `bdrdb` (the target-dbname) was also created. - ## Identify your data source You need the source hostname (source-host), port (source-port), database name (source-dbname), user, and password for your source database. Also, you currently need a list of tables in the database that you want to migrate to the target database. - ## Prepare a bastion server Create a virtual machine with your preferred operating system in the cloud to orchestrate your bulk loading. -* Use your EDB account. - * Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. -* Set environment variables. - * Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the repository token. -* Configure the repositories. - * Run the automated installer to install the repositories. -* Install the required software. - * Install and configure: - * psql - * PGD CLI - * Migration Toolkit - * LiveCompare +- Use your EDB account. + - Obtain your EDB repository token from the [EDB Repos 2.0](https://www.enterprisedb.com/repos-downloads) page. +- Set environment variables. + - Set the `EDB_SUBSCRIPTION_TOKEN` environment variable to the repository token. +- Configure the repositories. + - Run the automated installer to install the repositories. +- Install the required software. + - Install and configure: + - psql + - PGD CLI + - Migration Toolkit + - LiveCompare ### Use your EDB account @@ -74,13 +71,15 @@ export EDB_SUBSCRIPTION_TOKEN=your-repository-token The required software is available from the EDB repositories. You need to install the EDB repositories on your bastion server. -* Red Hat +- Red Hat + ```shell curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.rpm.sh" | sudo -E bash curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/enterprise/setup.rpm.sh" | sudo -E bash ``` -* Ubuntu/Debian +- Ubuntu/Debian + ```shell curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/postgres_distributed/setup.deb.sh" | sudo -E bash curl -1sLf "https://downloads.enterprisedb.com/$EDB_SUBSCRIPTION_TOKEN/enterprise/setup.deb.sh" | sudo -E bash @@ -94,12 +93,14 @@ Once the repositories are configured, you can install the required software. The psql command is the interactive terminal for working with PostgreSQL. It's a client application and can be installed on any operating system. Packaged with psql are pg_dump and pg_restore, command-line utilities for dumping and restoring PostgreSQL databases. -* Ubuntu +- Ubuntu + ```shell sudo apt install postgresql-client-16 ``` -* Red Hat +- Red Hat + ```shell sudo dnf install postgresql-client-16 ``` @@ -118,15 +119,18 @@ Ensure that your passwords are appropriately escaped in the `.pgpass` file. 
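To make the file's layout concrete, here's a minimal sketch of a `.pgpass` entry. Each entry is a single `hostname:port:database:username:password` line; the names below are the `ab-cluster` examples used in this guide, and `target-password` is a placeholder for your real password:

```shell
# Append an entry for the target cluster to ~/.pgpass.
# "ab-proxy", "bdrdb", and "edb_admin" are this guide's example names;
# replace "target-password" with your password, escaping any ":" or "\"
# characters in it with a backslash.
cat >> "$HOME/.pgpass" <<'EOF'
ab-proxy:5432:bdrdb:edb_admin:target-password
EOF
```

libpq reads this file only when it's restricted to the owner, which is why the next step sets its permissions with `chmod 0600`.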
If a chmod 0600 $HOME/.pgpass ``` -#### Installing PGD CLI +#### Installing PGD CLI PGD CLI is a command-line interface for managing and monitoring PGD clusters. It's a Go application and can be installed on any operating system. -* Ubuntu +- Ubuntu + ```shell sudo apt-get install edb-pgd5-cli ``` -* Red Hat + +- Red Hat + ```shell sudo dnf install edb-pgd5-cli ``` @@ -151,19 +155,21 @@ cluster: Save it as `pgd-cli-config.yml`. -See also [Installing PGD CLI](/pgd/latest/cli/installing_cli/). - +See also [Installing PGD CLI](/pgd/latest/cli/installing/). #### Installing Migration Toolkit EDB's Migration Toolkit (MTK) is a command-line tool you can use to migrate data from a source database to a target database. It's a Java application and requires a Java runtime environment to be installed. -* Ubuntu +- Ubuntu + ```shell sudo apt-get -y install edb-migrationtoolkit sudo wget https://jdbc.postgresql.org/download/postgresql-42.7.2.jar -P /usr/edb/migrationtoolkit/lib ``` -* Red Hat + +- Red Hat + ```shell sudo apt-get -y install edb-migrationtoolkit sudo wget https://jdbc.postgresql.org/download/postgresql-42.7.2.jar -P /usr/edb/migrationtoolkit/lib @@ -175,11 +181,14 @@ See also [Installing Migration Toolkit](/migration_toolkit/latest/installing/) EDB LiveCompare is an application you can use to compare two databases and generate a report of the differences. You'll use it later in this process to verify the data migration. -* Ubuntu +- Ubuntu + ``` sudo apt-get -y install edb-livecompare ``` -* Red Hat + +- Red Hat + ``` sudo dnf -y install edb-livecompare ``` @@ -192,7 +201,6 @@ On the target cluster and within the regional group required, select one node to If you have a group `ab-group` with `ab-node-1`, `ab-node-2`, and `ab-node-3`, you can select `ab-node-1` as the destination node. - ### Set up a fence Fence off all other nodes except for the destination node. @@ -208,7 +216,6 @@ select bdr.alter_node_option('ab-node-3','route_fence','t'); The next time you connect with psql, you're directed to the write leader, which should be the destination node. To ensure that it is, you need to send two more commands. - ### Make the destination node both write and raft leader To minimize the possibility of disconnections, move the raft and write leader roles to the destination node. @@ -221,7 +228,6 @@ bdr.raft_leadership_transfer('ab-node-1',true,'ab-group'); Because you fenced off the other nodes in the group, this command triggers a write leader election in the `ab-group` that elects the `ab-node-1` as write leader. - ### Record then clear default commit scopes You need to make a record of the default commit scopes in the cluster. The next step overwrites the settings. (At the end of this process, you need to restore them.) Run: @@ -237,7 +243,7 @@ This command produces an output similar to:: -----------------+---------------------- world | ab-group | ba001_ab-group-a - ``` +``` Record these values. You can now overwrite the settings: @@ -249,7 +255,8 @@ select bdr.alter_node_group_option('ab-group','default_commit_scope', 'local'); Check that the target cluster is healthy. 
-* To check the overall health of the cluster, run` pgd -f pgd-cli-config.yml check-health` : +- To check the overall health of the cluster, run` pgd -f pgd-cli-config.yml check-health` : + ``` Check Status Message ----- ------ ------- @@ -259,9 +266,11 @@ Raft Ok Raft Consensus is working correctly Replslots Ok All BDR replication slots are working correctly Version Ok All nodes are running same BDR versions ``` + (When the cluster is healthy, all checks pass.) -* To verify the configuration of the cluster, run `pgd -f pgd-cli-config.yml verify-cluster`: +- To verify the configuration of the cluster, run `pgd -f pgd-cli-config.yml verify-cluster`: + ``` Check Status Groups ----- ------ ------ @@ -272,9 +281,11 @@ Witness-only group does not have any child groups There is at max 1 witness-only group iff there is even number of local Data Groups Ok There are at least 2 proxies configured per Data Group if routing is enabled Ok ``` + (When the cluster is verified, all checks.) -* To check the status of the nodes, run `pgd -f pgd-cli-config.yml show-nodes`: +- To check the status of the nodes, run `pgd -f pgd-cli-config.yml show-nodes`: + ``` Node Node ID Group Type Current State Target State Status Seq ID ---- ------- ----- ---- ------------- ------------ ------ ------ @@ -283,14 +294,13 @@ ab-node-2 2587806295 ab-group data ACTIVE ACTIVE Up 2 ab-node-3 199017004 ab-group data ACTIVE ACTIVE Up 3 ``` +- To confirm the raft leader, run `pgd -f pgd-cli-config.yml show-raft`. -* To confirm the raft leader, run `pgd -f pgd-cli-config.yml show-raft`. +- To confirm the replication slots, run `pgd -f pgd-cli-config.yml show-replslots`. -* To confirm the replication slots, run `pgd -f pgd-cli-config.yml show-replslots`. +- To confirm the subscriptions, run `pgd -f pgd-cli-config.yml show-subscriptions`. -* To confirm the subscriptions, run `pgd -f pgd-cli-config.yml show-subscriptions`. - -* To confirm the groups, run `pgd -f pgd-cli-config.yml show-groups`. +- To confirm the groups, run `pgd -f pgd-cli-config.yml show-groups`. These commands provide a snapshot of the state of the cluster before the migration begins. @@ -298,11 +308,10 @@ These commands provide a snapshot of the state of the cluster before the migrati Currently, you must migrate the data in four phases: -1. Transferring the “pre-data” using pg_dump and pg_restore, which exports and imports all the data definitions. -1. Transfer the role definitions using pg_dumpall and psql. -1. Using MTK with the `--dataonly` option to transfer only the data from each table, repeating as necessary for each table. -1. Transferring the “post-data” using pg_dump and pg_restore, which completes the data transfer. - +1. Transferring the “pre-data” using pg_dump and pg_restore, which exports and imports all the data definitions. +2. Transfer the role definitions using pg_dumpall and psql. +3. Using MTK with the `--dataonly` option to transfer only the data from each table, repeating as necessary for each table. +4. Transferring the “post-data” using pg_dump and pg_restore, which completes the data transfer. ### Transferring the pre-data @@ -472,4 +481,3 @@ LiveCompare compares the source and target databases and generates a report of t Review the report to ensure that the data migration was successful. Refer to the [LiveCompare](/livecompare/latest/) documentation for more information on using LiveCompare. 
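Looking back at the four migration phases listed above, a rough sketch of how they translate into commands follows. It uses the example `ab-cluster` names from this guide, a source reachable as `source-host`, and placeholder database, user, and file names; the Migration Toolkit step is left schematic because its exact options depend on the installed version:

```shell
# Phase 1: copy schema definitions ("pre-data") from source to target.
pg_dump -h source-host -U source-user -d source-db --section=pre-data -Fc -f pre-data.dump
pg_restore -h ab-proxy -U edb_admin -d bdrdb --section=pre-data pre-data.dump

# Phase 2: copy role definitions.
pg_dumpall -h source-host -U source-user --roles-only -f roles.sql
psql -h ab-proxy -U edb_admin -d bdrdb -f roles.sql

# Phase 3: bulk-load the data for each table with Migration Toolkit in
# data-only mode, repeating per table (see the MTK documentation for flags).

# Phase 4: copy indexes, constraints, and other "post-data" objects.
pg_dump -h source-host -U source-user -d source-db --section=post-data -Fc -f post-data.dump
pg_restore -h ab-proxy -U edb_admin -d bdrdb --section=post-data post-data.dump
```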
- diff --git a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx index 0fc58c2f5b3..0ab7ab33b50 100644 --- a/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx +++ b/product_docs/docs/biganimal/release/using_cluster/fault_injection_testing/index.mdx @@ -12,19 +12,19 @@ the availability and recovery of the cluster. Before using fault injection testing, ensure you meet the following requirements: -+ You've connected your BigAnimal cloud account with your Azure subscription. See [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) for more information. -+ You have permissions in your Azure subscription to view and delete VMs and also the ability to view Kubernetes pods via Azure Kubernetes Service RBAC Reader. -+ You have PGD CLI installed. See [Installing PGD CLI](/pgd/latest/cli/installing_cli/#) for more information. -+ You've created a `pgd-cli-config.yml` file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. +- You've connected your BigAnimal cloud account with your Azure subscription. See [Setting up your Azure Marketplace account](/biganimal/latest/getting_started/02_azure_market_setup/) for more information. +- You have permissions in your Azure subscription to view and delete VMs and also the ability to view Kubernetes pods via Azure Kubernetes Service RBAC Reader. +- You have PGD CLI installed. See [Installing PGD CLI](/pgd/latest/cli/installing/) for more information. +- You've created a `pgd-cli-config.yml` file in your home directory. See [Configuring PGD CLI](/pgd/latest/cli/configuring_cli/) for more information. ## Fault injection testing steps Fault injection testing consists of the following steps: -1. Verifying cluster health -2. Determining the write leader node for your cluster -3. Deleting a write leader node from your cluster -4. Monitoring cluster health +1. Verifying cluster health +2. Determining the write leader node for your cluster +3. Deleting a write leader node from your cluster +4. Monitoring cluster health ### Verifying cluster health @@ -54,7 +54,6 @@ For help with a specific command and its parameters, enter `pgd help ` with your EDB subscription token in the following command: +To [install the PGD CLI](/pgd/latest/cli/installing/), for Debian and Ubuntu machines, replace `` with your EDB subscription token in the following command: ```bash curl -1sLf 'https://downloads.enterprisedb.com/<your-token>/postgres_distributed/setup.deb.sh' | sudo -E bash @@ -28,15 +28,16 @@ sudo yum install edb-pgd5-cli To connect to your distributed high-availability BigAnimal cluster using the PGD CLI, you need to [discover the database connection string](/pgd/latest/cli/discover_connections/). From your BigAnimal console: -1. Log in to the [BigAnimal clusters](https://portal.biganimal.com/clusters) view. -1. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**. -1. Select your cluster. -1. In the view of your cluster, select the **Connect** tab. -1. Copy the read/write URI from the connection info. This is your connection string. +1. Log in to the [BigAnimal clusters](https://portal.biganimal.com/clusters) view. +2. To show only clusters that work with PGD CLI, in the filter, set **Cluster Type** to **Distributed High Availability**. +3. Select your cluster. 
+4. In the view of your cluster, select the **Connect** tab. +5. Copy the read/write URI from the connection info. This is your connection string. ### Using the PGD CLI with your database connection string -!!! Important +!!!Important + PGD doesn't prompt for interactive passwords. Accordingly, you need a [`.pgpass` file](https://www.postgresql.org/docs/current/libpq-pgpass.html) properly configured to allow access to the cluster. Your BigAnimal cluster's connection information page has all the information needed for the file. Without a properly configured `.pgpass`, you receive a database connection error when using a PGD CLI command, even when using the correct database connection string with the `--dsn` flag. @@ -50,10 +51,11 @@ pgd show-nodes --dsn "" ## PGD commands in BigAnimal -!!! Note +!!!Note + Three EDB Postgres Distributed CLI commands don't work with distributed high-availability BigAnimal clusters: `create-proxy`, `delete-proxy`, and `alter-proxy-option`. These commands are managed by BigAnimal, as BigAnimal runs on Kubernetes. It's a technical best practice to have the Kubernetes operator handle these functions. !!! - + The examples that follow show the most common PGD CLI commands with a BigAnimal cluster. ### `pgd check-health` @@ -90,7 +92,6 @@ p-mbx2p83u9n-a-3 2604177211 p-mbx2p83u9n-a data ACTIVE ACTIVE Up `pgd show-groups` returns all groups in your distributed high-availability BigAnimal cluster. It also notes the node that's the current write leader of each group: - ``` $ pgd show-groups --dsn "postgres://edb_admin@p-mbx2p83u9n-a.pg.biganimal.io:5432/bdrdb?sslmode=require" __OUTPUT__ @@ -105,7 +106,7 @@ p-mbx2p83u9n-a 2800873689 data world true true p-mbx2p83 `pgd switchover` manually changes the write leader of the group and can be used to simulate a [failover](/pgd/latest/quickstart/further_explore_failover). -``` +``` $ pgd switchover --group-name world --node-name p-mbx2p83u9n-a-2 --dsn "postgres://edb_admin@p-mbx2p83u9n-a.pg.biganimal.io:5432/bdrdb?sslmode=require" __OUTPUT__ switchover is complete diff --git a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/02_sql_tutorial/09_the_sql_language.mdx b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/02_sql_tutorial/09_the_sql_language.mdx index 8b454b2e489..cef310f004d 100644 --- a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/02_sql_tutorial/09_the_sql_language.mdx +++ b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/02_sql_tutorial/09_the_sql_language.mdx @@ -15,4 +15,4 @@ The *Database Compatibility for Oracle Developers SQL Guide* provides detailed i To review a copy of the guide, visit the Advanced Server website at: - +[https://www.enterprisedb.com/docs/epas/latest/epas_compat_sql/](/epas/12/epas_compat_sql/) diff --git a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/08_oracle_catalog_views.mdx b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/08_oracle_catalog_views.mdx index e225a02cd0b..8fe24227de5 100644 --- a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/08_oracle_catalog_views.mdx +++ b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/08_oracle_catalog_views.mdx @@ -11,4 +11,4 @@ legacyRedirectsGenerated: The Oracle Catalog Views provide information about database objects in a manner compatible with the Oracle data dictionary views. 
Information about the supported views is now available in the *Database Compatibility for Oracle Developers Catalog Views Guide*, available at: - +[https://www.enterprisedb.com/docs/epas/latest/epas_compat_cat_views/](/epas/12/epas_compat_cat_views/) diff --git a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/10_ecpgplus.mdx b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/10_ecpgplus.mdx index 6f718151d03..ad5a5aae8c0 100644 --- a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/10_ecpgplus.mdx +++ b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/10_ecpgplus.mdx @@ -20,4 +20,4 @@ As part of ECPGPlus' Pro\*C compatibility, you do not need to include the `BEGIN For more information about using ECPGPlus, see the *EDB Postgres Advanced Server ECPG Connector Guide* available from the EnterpriseDB website at: - +[https://www.enterprisedb.com/docs/epas/latest/ecpgplus_guide/](/epas/12/ecpgplus_guide/) diff --git a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/11_system_catalog_tables.mdx b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/11_system_catalog_tables.mdx index 40a724ed179..0988c5ed48a 100644 --- a/product_docs/docs/epas/12/epas_compat_ora_dev_guide/11_system_catalog_tables.mdx +++ b/product_docs/docs/epas/12/epas_compat_ora_dev_guide/11_system_catalog_tables.mdx @@ -12,4 +12,4 @@ The system catalog tables contain definitions of database objects that are avail For detailed information about the system catalog tables, see the *Database Compatibility for Oracle Developers Catalog Views Guide*, available at: - +[https://www.enterprisedb.com/docs/epas/latest/epas_compat_cat_views/](/epas/12/epas_compat_cat_views/) diff --git a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/02_sql_tutorial/09_the_sql_language.mdx b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/02_sql_tutorial/09_the_sql_language.mdx index 63a4774aa24..1bcbd2f30a2 100644 --- a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/02_sql_tutorial/09_the_sql_language.mdx +++ b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/02_sql_tutorial/09_the_sql_language.mdx @@ -18,4 +18,4 @@ The *Database Compatibility for Oracle Developers SQL Guide* provides detailed i To review a copy of the guide, visit the Advanced Server website at: - +[https://www.enterprisedb.com/docs/epas/latest/epas_compat_sql/](/epas/13/epas_compat_sql/) diff --git a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/08_oracle_catalog_views.mdx b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/08_oracle_catalog_views.mdx index e8c7285ba5e..63a772af87a 100644 --- a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/08_oracle_catalog_views.mdx +++ b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/08_oracle_catalog_views.mdx @@ -10,4 +10,4 @@ legacyRedirectsGenerated: The Oracle Catalog Views provide information about database objects in a manner compatible with the Oracle data dictionary views. 
Information about the supported views is now available in the *Database Compatibility for Oracle Developers Catalog Views Guide*, available at: - +[https://www.enterprisedb.com/docs/epas/latest/epas_compat_cat_views/](/epas/13/epas_compat_cat_views/) diff --git a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/10_ecpgplus.mdx b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/10_ecpgplus.mdx index aa2284d57a0..9c7a76bd8a3 100644 --- a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/10_ecpgplus.mdx +++ b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/10_ecpgplus.mdx @@ -20,4 +20,4 @@ As part of ECPGPlus' Pro\*C compatibility, you do not need to include the `BEGIN For more information about using ECPGPlus, see the *EDB Postgres Advanced Server ECPG Connector Guide* available from the EDB website at: - +[https://www.enterprisedb.com/docs/epas/latest/ecpgplus_guide/](/epas/13/ecpgplus_guide/) diff --git a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/11_system_catalog_tables.mdx b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/11_system_catalog_tables.mdx index 21a4d82ab8b..8a4a0e529b8 100644 --- a/product_docs/docs/epas/13/epas_compat_ora_dev_guide/11_system_catalog_tables.mdx +++ b/product_docs/docs/epas/13/epas_compat_ora_dev_guide/11_system_catalog_tables.mdx @@ -12,4 +12,4 @@ The system catalog tables contain definitions of database objects that are avail For detailed information about the system catalog tables, see the *Database Compatibility for Oracle Developers Catalog Views Guide*, available at: - +[https://www.enterprisedb.com/docs/epas/latest/epas_compat_cat_views/](/epas/13/epas_compat_cat_views/) diff --git a/product_docs/docs/epas/14/installing/windows/managing_an_advanced_server_installation/configuring_epas.mdx b/product_docs/docs/epas/14/installing/windows/managing_an_advanced_server_installation/configuring_epas.mdx index 43ac14b6c74..4d69e814c30 100644 --- a/product_docs/docs/epas/14/installing/windows/managing_an_advanced_server_installation/configuring_epas.mdx +++ b/product_docs/docs/epas/14/installing/windows/managing_an_advanced_server_installation/configuring_epas.mdx @@ -15,7 +15,7 @@ You can easily update parameters that determine the behavior of EDB Postgres Adv - The `pg_hba.conf` file specifies your preferences for network authentication and authorization. - The `pg_ident.conf` file maps operating system identities (user names) to EDB Postgres Advanced Server identities (roles) when using `ident`-based authentication. -For more information about Modifying the postgresql.conf file and Modifying the pg_hba.conf file, see [Setting parameters](https://www.enterprisedb.com/docs/epas/latest/epas_guide/03_database_administration/01_configuration_parameters/01_setting_new_parameters). +For more information about Modifying the postgresql.conf file and Modifying the pg_hba.conf file, see [Setting parameters](/epas/14/epas_guide/03_database_administration/01_configuration_parameters/01_setting_new_parameters/). You can use your editor of choice to open a configuration file, or on Windows navigate through the `EDB Postgres` menu to open a file. 
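For `postgresql.conf` parameters specifically, you can also avoid hand-editing the file by issuing `ALTER SYSTEM` from a client session. This sketch assumes a default EDB Postgres Advanced Server installation (port 5444, `enterprisedb` superuser) and an arbitrary example parameter:

```shell
# Write the new value to postgresql.auto.conf, which overrides postgresql.conf,
# then reload the configuration without restarting the service.
psql -p 5444 -U enterprisedb -d postgres -c "ALTER SYSTEM SET work_mem = '64MB';"
psql -p 5444 -U enterprisedb -d postgres -c "SELECT pg_reload_conf();"
```

Parameters flagged as requiring a restart, such as `shared_buffers`, take effect only after the service restarts.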
diff --git a/product_docs/docs/pem/8/considerations/pem_security_best_practices/apache_httpd_security_configuration.mdx b/product_docs/docs/pem/8/considerations/pem_security_best_practices/apache_httpd_security_configuration.mdx index ab118929888..22832009d21 100644 --- a/product_docs/docs/pem/8/considerations/pem_security_best_practices/apache_httpd_security_configuration.mdx +++ b/product_docs/docs/pem/8/considerations/pem_security_best_practices/apache_httpd_security_configuration.mdx @@ -181,6 +181,7 @@ STRICT_TRANSPORT_SECURITY = "max-age=31536000;includeSubDomains" ``` !!! Note + Adding this parameter can cause problems if config is changed. Therefore, we recommend that you add it only after PEM installation is complete and tested. ### X-Content-Type-Options @@ -213,33 +214,33 @@ By default, PEM sets `X-XSS-Protection to "1; mode=block"` in the application co To apply the changes, restart the Apache service. -For detailed information on the `config.py` file, see [Managing configuration settings](/pem/latest/pem_online_help/01_toc_pem_getting_started/03_pem_managing_configuration_settings/). +For detailed information on the `config.py` file, see [Managing configuration settings](/pem/8/pem_online_help/01_toc_pem_getting_started/03_pem_managing_configuration_settings/). ### Cookie security Cookies are small packets of data that a server sends to your browser to store configuration data. The browser sends them and all other requests to the same server, so it’s important to know how to secure cookies. Multiple configuration options in `config.py` can make cookies secure. These are the three most important options: -- SESSION_COOKIE_SECURE — The flag prevents cookies from sending over an unencrypted connection. The browser can't add the cookie to any request to a server without an encrypted channel. The browser can add cookies only to connections such as HTTPS. The default is: +- SESSION_COOKIE_SECURE — The flag prevents cookies from sending over an unencrypted connection. The browser can't add the cookie to any request to a server without an encrypted channel. The browser can add cookies only to connections such as HTTPS. The default is: ```ini SESSION_COOKIE_SECURE = True ``` -- SESSION_COOKIE_HTTPONLY — By default, JavaScript can read the content of cookies. The `HTTPOnly` flag prevents scripts from reading the cookie. Instead, the browser uses the cookie only with HTTP or HTTPS requests. Hackers can't exploit XSS vulnerabilities to learn the contents of the cookie. For example, the `sessionId` cookie never requires that it be read with a client-side script. So, you can set the `HTTPOnly` flag for `sessionId` cookies. The default is: +- SESSION_COOKIE_HTTPONLY — By default, JavaScript can read the content of cookies. The `HTTPOnly` flag prevents scripts from reading the cookie. Instead, the browser uses the cookie only with HTTP or HTTPS requests. Hackers can't exploit XSS vulnerabilities to learn the contents of the cookie. For example, the `sessionId` cookie never requires that it be read with a client-side script. So, you can set the `HTTPOnly` flag for `sessionId` cookies. The default is: ```ini SESSION_COOKIE_HTTPONLY = True ``` -- ENHANCED_COOKIE_PROTECTION — When you set this option to `True`, then a token is generated according to the IP address and user agent. In all subsequent requests, the token recalculates and compares to the one computed for the first request. If the session cookie is stolen and the attacker uses it from another location, the generated token is different. 
In that case, the extension clears the session and blocks the request. The default is: +- ENHANCED_COOKIE_PROTECTION — When you set this option to `True`, then a token is generated according to the IP address and user agent. In all subsequent requests, the token recalculates and compares to the one computed for the first request. If the session cookie is stolen and the attacker uses it from another location, the generated token is different. In that case, the extension clears the session and blocks the request. The default is: ```ini ENHANCED_COOKIE_PROTECTION = True ``` - !!! Note - This option can cause problems when the server deploys in dynamic IP address hosting environments, such as Kubernetes or behind load balancers. In such cases, set this option to `False`. + !!! Note + This option can cause problems when the server deploys in dynamic IP address hosting environments, such as Kubernetes or behind load balancers. In such cases, set this option to `False`. - To apply the changes, restart the Apache service. + To apply the changes, restart the Apache service. - For detailed information on `config.py` file, see [Managing Configuration Settings](/pem/latest/pem_online_help/01_toc_pem_getting_started/03_pem_managing_configuration_settings/). + For detailed information on `config.py` file, see [Managing Configuration Settings](/pem/8/pem_online_help/01_toc_pem_getting_started/03_pem_managing_configuration_settings/). diff --git a/product_docs/docs/pem/8/considerations/pem_security_best_practices/pem_application_configuration.mdx b/product_docs/docs/pem/8/considerations/pem_security_best_practices/pem_application_configuration.mdx index ef982396de8..6907487180f 100644 --- a/product_docs/docs/pem/8/considerations/pem_security_best_practices/pem_application_configuration.mdx +++ b/product_docs/docs/pem/8/considerations/pem_security_best_practices/pem_application_configuration.mdx @@ -20,11 +20,12 @@ USER_INACTIVITY_TIMEOUT = 900 ``` !!! Note + The timeout value is specified in seconds. To apply the changes, restart the Apache service. -For detailed information on the `config.py` file, see [Managing Configuration Settings](/pem/latest/pem_online_help/01_toc_pem_getting_started/03_pem_managing_configuration_settings/). +For detailed information on the `config.py` file, see [Managing Configuration Settings](/pem/8/pem_online_help/01_toc_pem_getting_started/03_pem_managing_configuration_settings/). ## RestAPI header customization @@ -82,7 +83,7 @@ CREATE ROLE user_sql_profiler WITH LOGIN NOSUPERUSER NOCREATEDB NOCREATEROLE INH GRANT pem_user, pem_comp_sqlprofiler TO user_sql_profiler; ``` -For detailed information on roles, see [PEM Roles](/pem/latest/pem_online_help/01_toc_pem_getting_started/04_pem_roles/). +For detailed information on roles, see [PEM Roles](/pem/8/pem_online_help/01_toc_pem_getting_started/04_pem_roles/). ## SQL/Protect plugin @@ -93,9 +94,10 @@ SQL/Protect is a module that allows a database administrator to protect a databa Attackers can perpetrate SQL injection attacks with several different techniques. A specific signature characterizes each technique. SQL/Protect examines queries for unauthorized relations, utility commands, SQL tautology, and unbounded DML statements. SQL/Protect gives the control back to the database administrator by alerting the administrator to potentially dangerous queries and then blocking those queries. !!! 
Note + This plugin works only on the EDB Postgres Advanced Server server, so this is useful only when your PEM database is hosted on the EDB Postgres Advanced Server server. -For detailed information about the SQL Profiler plugin, see [SQL Profiler](/pem/latest/pem_online_help/07_toc_pem_sql_profiler/). +For detailed information about the SQL Profiler plugin, see [SQL Profiler](/pem/8/pem_online_help/07_toc_pem_sql_profiler/). ## Password management @@ -120,7 +122,8 @@ HKEY_LOCAL_MACHINE\Software\EnterpriseDB\PEM\agent ## Changing the pemAgent and PEM backend database server certificates -By default, when you install PEM, the installer generates and uses self-signed certificates for the pemAgent and PEM database server. PemAgent uses these certificates when connecting to the PEM database server. To use your own SSL certificate for the pemAgent and PEM database server, see [Managing certificates](/pem/latest/managing_certificates/). +By default, when you install PEM, the installer generates and uses self-signed certificates for the pemAgent and PEM database server. PemAgent uses these certificates when connecting to the PEM database server. To use your own SSL certificate for the pemAgent and PEM database server, see [Managing certificates](/pem/8/managing_certificates/). !!! Note + PEM doesn't support placing the SSL CA certificates at a custom location. Don't change the location of `ca_certificate.crt` and `ca_key.key`. diff --git a/product_docs/docs/pem/8/monitoring_BDR_nodes.mdx b/product_docs/docs/pem/8/monitoring_BDR_nodes.mdx index 93f55301c1d..7f4d8baea98 100644 --- a/product_docs/docs/pem/8/monitoring_BDR_nodes.mdx +++ b/product_docs/docs/pem/8/monitoring_BDR_nodes.mdx @@ -4,13 +4,14 @@ title: "Monitoring EDB Postgres Distributed" --- !!! Tip "New Feature " + EDB Postgres Distributed support is available in PEM version 8.1.0 and later. EDB Postgres Distributed provides multi-master replication and data distribution with advanced conflict management, data-loss protection, and [throughput up to 5X faster than native logical replication](https://www.enterprisedb.com/blog/performance-improvements-edb-postgres-distributed), and enables distributed PostgreSQL clusters with high availability up to five 9s. Before you monitor nodes in a EDB Postgres Distributed cluster through the PEM console, you must first deploy a EDB Postgres Distributed cluster and ensure that your database nodes are up and running. For more information on installing EDB Postgres Distributed see [EDB Postgres Distributed](/pgd/latest). You can configure PEM to display status information about one or more EDB Postgres Distributed database nodes using dashboards in PEM version 8.1.0 and EDB Postgres Distributed version 3.7.9 and later. -To configure PEM to monitor EDB Postgres Distributed database nodes, use the PEM web client to create a server definition. Use the tabs on the [New Server Registration](/pem/latest/pem_online_help/01_toc_pem_getting_started/07_pem_define_connection/) dialog box to specify general connection properties for the EDB Postgres Distributed database node with the following exceptions: +To configure PEM to monitor EDB Postgres Distributed database nodes, use the PEM web client to create a server definition. 
Use the tabs on the [New Server Registration](pem_online_help/01_toc_pem_getting_started/07_pem_define_connection/) dialog box to specify general connection properties for the EDB Postgres Distributed database node with the following exceptions: - Specify the EDB Postgres Distributed-enabled database name in the **Database** field of the **PEM Agent** tab. diff --git a/product_docs/docs/pem/8/pem_architecture.mdx b/product_docs/docs/pem/8/pem_architecture.mdx index dd4cee0b092..d0e90b81365 100644 --- a/product_docs/docs/pem/8/pem_architecture.mdx +++ b/product_docs/docs/pem/8/pem_architecture.mdx @@ -14,19 +14,20 @@ redirects: Postgres Enterprise Manager (PEM) monitors and manages multiple Postgres servers through a single graphical interface. PEM can monitor the following areas of the infrastructure: -- **Hosts** — One or more servers (physical or virtual) and their operating systems. -- **Database servers** — One or more instances of PostgreSQL or EDB Postgres Advanced Server or EDB Postgres Extended Server (formerly known as 2ndQPostgres) running on a host. -- **Databases** — One or more databases and their schema objects, such as tables and indexes. +- **Hosts** — One or more servers (physical or virtual) and their operating systems. +- **Database servers** — One or more instances of PostgreSQL or EDB Postgres Advanced Server or EDB Postgres Extended Server (formerly known as 2ndQPostgres) running on a host. +- **Databases** — One or more databases and their schema objects, such as tables and indexes. !!! Note + The term Postgres refers to PostgreSQL, EDB Postgres Advanced Server, or EDB Postgres Extended Server. PEM consists of individual software components: -- **PEM server** — The PEM server is the data repository for monitoring data and a server to which agents and clients connect. The PEM server consists of an instance of PostgreSQL, an associated database for storing monitoring data, and a server that provides web services. -- **PEM agent** — The PEM agent is responsible for executing tasks and reporting statistics from the agent host and the monitored Postgres instances to the PEM server. A single PEM agent can monitor multiple installed instances of Postgres that reside on one or many hosts. -- **PEM web client** — The PEM web interface allows you to manage and monitor Postgres servers and use PEM extended functionality. The web interface software is installed with the PEM server and is accessed using any supported web browser. -- **SQL Profiler** — SQL Profiler is a Postgres server plugin to record the monitoring data and query plans for the SQL Profiler tool to analyze in PEM. This is an optional component of PEM, but the plugin must be installed in each instance of Postgres for which you want to use it. You can use the SQL Profiler with any supported version of an EDB distribution of a PostgreSQL server or EDB Postgres Advanced Server, not just those managed through the PEM server. See [SQL Profiler Configuration](/pem/latest/profiling_workloads/pem_sqlprofiler/) for details and supported versions. +- **PEM server** — The PEM server is the data repository for monitoring data and a server to which agents and clients connect. The PEM server consists of an instance of PostgreSQL, an associated database for storing monitoring data, and a server that provides web services. +- **PEM agent** — The PEM agent is responsible for executing tasks and reporting statistics from the agent host and the monitored Postgres instances to the PEM server. 
A single PEM agent can monitor multiple installed instances of Postgres that reside on one or many hosts.
+- **PEM web client** — The PEM web interface allows you to manage and monitor Postgres servers and use PEM extended functionality. The web interface software is installed with the PEM server and is accessed using any supported web browser.
+- **SQL Profiler** — SQL Profiler is a Postgres server plugin to record the monitoring data and query plans for the SQL Profiler tool to analyze in PEM. This is an optional component of PEM, but the plugin must be installed in each instance of Postgres for which you want to use it. You can use the SQL Profiler with any supported version of an EDB distribution of a PostgreSQL server or EDB Postgres Advanced Server, not just those managed through the PEM server. See [SQL Profiler Configuration](profiling_workloads/pem_sqlprofiler/) for details and supported versions.
 
 ## PEM architecture
 
@@ -42,25 +43,26 @@ The PEM server consists of an instance of Postgres, an instance of the Apache we
 
 The instance of Postgres (a database server) and an instance of the Apache web-server HTTPD) can be on the same host or on separate hosts.
 
-    !!! Note
-        All the PEM features are available on either backend database server you select: PostgreSQL or EDB Postgres Advanced Server.
-
-- **Postgres instance (database server)** — This is the backend database server. It hosts a database named `pem`, which acts as the repository for PEM server. The `pem` database contains several schemas that store metric data collected from each monitored host, server, and database.
-
-    - `pem` — This schema is the core of the PEM application. It contains the definitions of configuration functions, tables, or views required by the application.
-    - `pemdata` — This schema stores the current snapshot of the monitored data.
-    - `pemhistory` — This schema stores the historical monitored data.
-- **Apache web server (HTTPD)** — The PEM web application is deployed as a WSGI application with HTTPD to provide web services to the client. It is made up of the following:
-    - **Web content presentation** — The presentation layer is created by the web application (such as browser and login page).
-    - **Rest API** — The REST API allows integration with other apps and services.
-    - **Database server administration/management** — You can perform database server administration and management activities like CREATE, ALTER, and DROP for managed and unmanaged servers.
-    - **Dashboard/chart generation** — Internally, the web application includes functionality that generates dashboards and charts.
-    - **Management tools** — The Audit Manager, Capacity Manager, Log Manager, Postgres Expert, Postgres Log Analysis Expert, and the Tuning Wizard are available in the web ppplication.
+!!! Note
+
+    All the PEM features are available on either backend database server you select: PostgreSQL or EDB Postgres Advanced Server.
+
+- **Postgres instance (database server)** — This is the backend database server. It hosts a database named `pem`, which acts as the repository for PEM server. The `pem` database contains several schemas that store metric data collected from each monitored host, server, and database.
+
+  - `pem` — This schema is the core of the PEM application. It contains the definitions of configuration functions, tables, or views required by the application.
+  - `pemdata` — This schema stores the current snapshot of the monitored data.
+  - `pemhistory` — This schema stores the historical monitored data.
+- **Apache web server (HTTPD)** — The PEM web application is deployed as a WSGI application with HTTPD to provide web services to the client. It is made up of the following:
+  - **Web content presentation** — The presentation layer is created by the web application (such as browser and login page).
+  - **Rest API** — The REST API allows integration with other apps and services.
+  - **Database server administration/management** — You can perform database server administration and management activities like CREATE, ALTER, and DROP for managed and unmanaged servers.
+  - **Dashboard/chart generation** — Internally, the web application includes functionality that generates dashboards and charts.
+  - **Management tools** — The Audit Manager, Capacity Manager, Log Manager, Postgres Expert, Postgres Log Analysis Expert, and the Tuning Wizard are available in the web application.
 
 - Other tools provide functionality on managed or unmanaged servers:
-    - **SQL Profiler UI integration** — SQL Profiler generates easily analyzed traces of session content.
-    - **Query editor/data view** — The Query editor allows you to query, edit, and view data.
-    - **Debugger** — The debugger helps you debug queries.
-    - **Performance diagnostics** — Performance diagnostics help you analyze the performance of Postgres instances.
+  - **SQL Profiler UI integration** — SQL Profiler generates easily analyzed traces of session content.
+  - **Query editor/data view** — The Query editor allows you to query, edit, and view data.
+  - **Debugger** — The debugger helps you debug queries.
+  - **Performance diagnostics** — Performance diagnostics help you analyze the performance of Postgres instances.
 
 We recommend that you use a dedicated machine to host production instances of the PEM backend database. The host might be subject to high levels of data throughput, depending on the number of database servers that are being monitored and the workloads the servers are processing.
 
@@ -87,13 +90,13 @@ Once configured, each agent collects statistics and other information on the hos
 - Table access statistics
 - Table and index sizes
 
-For a list of PEM probes, see [Probes](/pem/latest/pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/01_pem_probes/).
+For a list of PEM probes, see [Probes](pem_online_help/04_toc_pem_features/12_pem_manage_probes/03_pem_probe_config/01_pem_probes/).
 
 By default, the PEM agent bound to the database server collects the OS/database monitoring statistics and also runs any scheduled tasks/jobs for that particular database server, storing data in the `pem` database on the PEM server.
 
 The alert processing, SNMP/SMTP spoolers, and Nagios spooler data is stored in the `pem` database on the PEM server and is then processed by the PEM agent on the PEM host by default. However, you can enable processing by other PEM Agents by adjusting the SNMP/SMTP and Nagios parameters of the PEM agents.
 
-For more information about these parameters, see [Server configuration](/pem/latest/pem_online_help/04_toc_pem_features/02_pem_server_config/01_pem_config_options/).
+For more information about these parameters, see [Server configuration](pem_online_help/04_toc_pem_features/02_pem_server_config/01_pem_config_options/).
 
 ### PEM web client
 
@@ -109,4 +112,4 @@ The plugin is installed with the EDB Postgres Advanced Server distribution but m
 
 You can use SQL Profiler on servers that aren't managed through PEM.
However, to perform scheduled traces, a server must have the plugin installed and must be managed by an installed and configured PEM agent. -For more information about using SQL Profiler, see [SQL Profiler](/pem/latest/profiling_workloads/pem_sqlprofiler/). +For more information about using SQL Profiler, see [SQL Profiler](profiling_workloads/pem_sqlprofiler/). diff --git a/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/01_pem_architecture.mdx b/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/01_pem_architecture.mdx index 359cc7397c8..023fea7d24f 100644 --- a/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/01_pem_architecture.mdx +++ b/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/01_pem_architecture.mdx @@ -20,7 +20,7 @@ PEM consists of a number of individual software components; the individual compo - **PEM Server** - The PEM Server is used as the data repository for monitoring data and as a server to which both Agents and Clients connect. The PEM server consists of an instance of PostgreSQL and an associated database for storage of monitoring data, and a server that provides web services. - **PEM Agent** - The PEM Agent is responsible for executing tasks and reporting statistics from the Agent host and monitored Postgres instances to the PEM server. A single PEM Agent can monitor multiple installed instances of Postgres that reside on one or many hosts. - **PEM Web Client** - The PEM web interface allows you to manage and monitor Postgres servers and utilize PEM extended functionality. The web interface software is installed with the PEM server and is accessed via any supported web browser. -- **SQL Profiler** - SQL Profiler is a Postgres server plugin to record the monitoring data and query plans to be analysed by the SQL Profiler tool in PEM. This is an optional component of PEM, but the plugin must be installed into each instance of Postgres with which you wish to use the SQL Profiler tool. The SQL Profiler may be used with any supported version of an EnterpriseDB distribution of a PostgreSQL server or Advanced Server (not just those managed through the PEM server). See the [PEM SQL Profiler configuration docs](/pem/latest/profiling_workloads/pem_sqlprofiler/) for details and supported versions. +- **SQL Profiler** - SQL Profiler is a Postgres server plugin to record the monitoring data and query plans to be analysed by the SQL Profiler tool in PEM. This is an optional component of PEM, but the plugin must be installed into each instance of Postgres with which you wish to use the SQL Profiler tool. The SQL Profiler may be used with any supported version of an EnterpriseDB distribution of a PostgreSQL server or Advanced Server (not just those managed through the PEM server). See the [PEM SQL Profiler configuration docs](/pem/8/profiling_workloads/pem_sqlprofiler/) for details and supported versions. **PEM architecture** @@ -37,10 +37,11 @@ The PEM server consists of an instance of Postgres, an instance of the Apache we The instance of Postgres (a database server) and an instance of the Apache web-server ( HTTPD) can be on the same host or on separate hosts. - **Postgres Instance (Database server)** - This is the backend database server. It hosts a database named **pem** which acts as the repository for PEM Server. The **pem** database contains several schemas that store metric data collected from each monitored host, server, and database. - + !!! 
Note + All the PEM features are available irrespective of which backend database server you select, PostgreSQL or EDB Postgres Advanced Server. - + - **pem** - This schema is the core of the PEM application. It contains the definitions of configuration functions, tables, or views required by the application. - **pemdata** - This schema stores the current snapshot of the monitored data. - **pemhistory** - This schema stores the historical monitored data. @@ -103,4 +104,4 @@ The plugin is installed with the EDB Postgres Advanced Server distribution but m SQL Profiler may be used on servers that are not managed through PEM, but to perform scheduled traces, a server must have the plugin installed, and must be managed by an installed and configured PEM agent. -For more information about using SQL Profiler, see the [PEM SQL Profiler Configuration docs](/pem/latest/profiling_workloads/pem_sqlprofiler/) +For more information about using SQL Profiler, see the [PEM SQL Profiler Configuration docs](/pem/8/profiling_workloads/pem_sqlprofiler/) diff --git a/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/07_pem_define_connection.mdx b/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/07_pem_define_connection.mdx index 39f8f8651e3..6657e2c3c7d 100644 --- a/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/07_pem_define_connection.mdx +++ b/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/07_pem_define_connection.mdx @@ -106,10 +106,11 @@ On `Connection Parameters` tab On `Advanced` tab -- Specify `Yes` in the `Allow takeover?` field to specify that another agent may be signaled (for example, by a fencing script) to monitor the server. This feature allows an agent to take responsibility for the monitoring of the database server if, for example, the server is part of a [high availability](/pem/latest/pem_online_help/02_toc_pem_agent/04_pem_agent_ha/) failover process. +- Specify `Yes` in the `Allow takeover?` field to specify that another agent may be signaled (for example, by a fencing script) to monitor the server. This feature allows an agent to take responsibility for the monitoring of the database server if, for example, the server is part of a [high availability](/pem/8/pem_online_help/02_toc_pem_agent/04_pem_agent_ha/) failover process. - Use the `+` sign to add the database you want to exclude from the PEM Monitoring. You cannot exclude the database mentioned on the `Connection Parameters` tab of the `PEM Agent` tab. !!! Note + The database-level probes do not execute for excluded databases, but the server-level probes may collect the database statistics. If you experience connection problems, please visit the [connection problems](11_connect_error/#connect_error) page. diff --git a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx index 6a89925ef07..3d6358caa53 100644 --- a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx +++ b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx @@ -12,9 +12,10 @@ You can use the Performance Diagnostic dashboard to analyze the database perform Peformance Diagnostic feature is supported for Advanced Server databases from PEM 7.6 version onwards and for PostgreSQL databases it is supported from PEM 8.0 onwards. !!! 
Note + For PostgreSQL databases, Performance Diagnostics is supported only for versions 10, 11, 12, and 13 installed on supported platforms. -For more information on EDB Wait States, see [EDB wait states docs](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). +For more information on EDB Wait States, see [EDB wait states docs](/epas/latest/managing_performance/evaluating_wait_states/#edb-wait-states). You can analyze the Wait States data on multiple levels by narrowing down your selection of data. Each level of the graph is populated on the basis of your selection of data at the higher level. diff --git a/product_docs/docs/pem/8/profiling_workloads/index_advisor.mdx b/product_docs/docs/pem/8/profiling_workloads/index_advisor.mdx index 509d8fa7793..580f576c736 100644 --- a/product_docs/docs/pem/8/profiling_workloads/index_advisor.mdx +++ b/product_docs/docs/pem/8/profiling_workloads/index_advisor.mdx @@ -23,14 +23,14 @@ Before using Index Advisor, you must: 3. Restart the server to make your changes to take effect. -Index Advisor can make indexing recommendations based on trace data captured by SQL Profiler. To open Index Advisor, select one or more queries in the SQL Profiler Trace Data pane and select **Index Advisor** from the toolbar. For more information about configuring and using Index Advisor, see [EDB Postgres Advanced Server](/epas/latest/epas_guide/03_database_administration/02_index_advisor/). +Index Advisor can make indexing recommendations based on trace data captured by SQL Profiler. To open Index Advisor, select one or more queries in the SQL Profiler Trace Data pane and select **Index Advisor** from the toolbar. For more information about configuring and using Index Advisor, see [EDB Postgres Advanced Server](/epas/15/managing_performance/02_index_advisor/). !!! Note + Index Advisor can't analyze statements invoked by a non-superuser. If you attempt to analyze statements invoked by a non-superuser, the server log includes the following error: `ERROR: access to library "index_advisor" is not allowed`  !!! Note - We recommend that you disable Index Advisor while using the pg_dump functionality. - + We recommend that you disable Index Advisor while using the pg_dump functionality. diff --git a/product_docs/docs/pem/8/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/8/tuning_performance/performance_diagnostic.mdx index 1ac5df4a62b..2f9566bb258 100644 --- a/product_docs/docs/pem/8/tuning_performance/performance_diagnostic.mdx +++ b/product_docs/docs/pem/8/tuning_performance/performance_diagnostic.mdx @@ -11,32 +11,34 @@ redirects: The Performance Diagnostic dashboard analyzes the database performance for Postgres instances by monitoring the wait events. To display the diagnostic graphs, PEM uses the data collected by the EDB Wait States module. !!! Note - - For PostgreSQL databases, Performance Diagnostic is supported for version 10 or later installed on the supported RHEL platforms. - - For EDB Postgres Extended databases, Performance Diagnostic is supported for version 11 or later on the supported RHEL platforms. + - For PostgreSQL databases, Performance Diagnostic is supported for version 10 or later installed on the supported RHEL platforms. -For more information on EDB wait states, see [EDB wait states](/epas/latest/epas_guide/13_performance_analysis_and_tuning/#edb-wait-states). + - For EDB Postgres Extended databases, Performance Diagnostic is supported for version 11 or later on the supported RHEL platforms. 
+ +For more information on EDB wait states, see [EDB wait states](/epas/latest/managing_performance/evaluating_wait_states/#edb-wait-states). To analyze the Wait States data on multiple levels, narrow down the data you select. The data you select at the higher level of the graph populates the lower level. ## Prerequisites -- For PostgreSQL, you need to install the `edb_wait_states_` package from `edb.repo`, where `` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you need to install the `edb-as-server-edb-modules`, where `` is the version of EDB Postgres Advanced Server. +- For PostgreSQL, you need to install the `edb_wait_states_` package from `edb.repo`, where `` is the version of PostgreSQL Server. For the steps to install this package, see [EDB Build Repository](https://repos.enterprisedb.com/). For EDB Postgres Advanced Server, you need to install the `edb-as-server-edb-modules`, where `` is the version of EDB Postgres Advanced Server. + +- After you install the EDB Wait States module of EDB Postgres Advanced Server: -- After you install the EDB Wait States module of EDB Postgres Advanced Server: - 1. Configure the list of libraries in the `postgresql.conf` file as shown: + 1. Configure the list of libraries in the `postgresql.conf` file as shown: - ```ini - shared_preload_libraries = '$libdir/edb_wait_states' - ``` + ```ini + shared_preload_libraries = '$libdir/edb_wait_states' + ``` - 1. Restart the database server. - - 1. Create the following extension in the maintenance database: + 2. Restart the database server. - ```sql - CREATE EXTENSION edb_wait_states; - ``` + 3. Create the following extension in the maintenance database: + + ```sql + CREATE EXTENSION edb_wait_states; + ``` - You need superuser privileges to access the Performance Diagnostic dashboard. @@ -58,9 +60,9 @@ The first graph displays the number of active sessions and wait event types for The next section plots the following graphs based on the selected duration in the first graph: -Donut graph — Shows total wait event types according to the duration selection in the first graph. It can provide a better understanding of how much time was spent by those sessions on waiting for an event. - -Line graph — Plots a time series with each point representing the active sessions for each sample time. +Donut graph — Shows total wait event types according to the duration selection in the first graph. It can provide a better understanding of how much time was spent by those sessions on waiting for an event. + +Line graph — Plots a time series with each point representing the active sessions for each sample time. To differentiate each wait event type and the CPU usage more clearly, the graph for each wait event type displays in a different color. @@ -78,9 +80,9 @@ Select the eye in any row of the **SQL** tab to display a window with details on The **Wait event types** section displays the total number of wait event types for the selected session ID and query ID. It shows two types of graphs: -Donut graph — Shows the proportions of categorical data. +Donut graph — Shows the proportions of categorical data. -Timeline bar graph — Visualizes trends in counts of wait event types over time. +Timeline bar graph — Visualizes trends in counts of wait event types over time. To differentiate, each wait event type is represented by a different color in the bar graph. 
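One way to confirm that the prerequisite steps above took effect is to query the server directly. The following is a minimal sketch, not an official procedure: it assumes `psql` is on the `PATH` and that the maintenance database is named `edb`, so substitute your own connection settings.

```shell
# Confirm the module is preloaded (changing this requires a server restart).
psql -d edb -c "SHOW shared_preload_libraries;"

# Check that the extension is packaged and visible to this server.
psql -d edb -c "SELECT name, default_version, installed_version FROM pg_available_extensions WHERE name = 'edb_wait_states';"

# Create the extension if it is not already installed.
psql -d edb -c "CREATE EXTENSION IF NOT EXISTS edb_wait_states;"
```

If `installed_version` comes back non-null, the Performance Diagnostic dashboard should be able to read wait-state samples from this instance.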
diff --git a/product_docs/docs/pgd/4/harp/03_installation.mdx b/product_docs/docs/pgd/4/harp/03_installation.mdx index 10b1b6bbe68..5d566c7fbd9 100644 --- a/product_docs/docs/pgd/4/harp/03_installation.mdx +++ b/product_docs/docs/pgd/4/harp/03_installation.mdx @@ -7,8 +7,8 @@ redirects: A standard installation of HARP includes two system services: -* HARP Manager (`harp-manager`) on the node being managed -* HARP Proxy (`harp-proxy`) elsewhere +- HARP Manager (`harp-manager`) on the node being managed +- HARP Proxy (`harp-proxy`) elsewhere There are two ways to install and configure these services to manage Postgres for proper quorum-based connection routing. @@ -18,17 +18,18 @@ Postgres for proper quorum-based connection routing. HARP has dependencies on external software. These must fit a minimum version as listed here. -| Software | Min version | -|-----------|---------| -| etcd | 3.4 | +| Software | Min version | +| -------- | ----------- | +| etcd | 3.4 | ## TPAExec The easiest way to install and configure HARP is to use the EDB TPAexec utility for cluster deployment and management. For details on this software, see the -[TPAexec product page](https://www.enterprisedb.com/docs/pgd/latest/deployments/tpaexec/). +[TPAexec product page](/pgd/4/deployments/tpaexec/). !!! Note + TPAExec is currently available only through an EULA specifically dedicated to EDB Postgres Distributed cluster deployments. If you can't access the TPAExec URL, contact your sales or account representative. @@ -43,6 +44,7 @@ cluster_vars: ``` !!! Note + Versions of TPAexec earlier than 21.1 require a slightly different approach: ```yaml @@ -61,7 +63,6 @@ tpaexec deploy ${CLUSTER_DIR} No other modifications are necessary apart from cluster-specific considerations. - ## Package installation Currently CentOS/RHEL packages are provided by the EDB packaging diff --git a/product_docs/docs/pgd/4/rel_notes/pgd_4.0.0_rel_notes.mdx b/product_docs/docs/pgd/4/rel_notes/pgd_4.0.0_rel_notes.mdx index dc0fd7b65a6..be660f7c624 100644 --- a/product_docs/docs/pgd/4/rel_notes/pgd_4.0.0_rel_notes.mdx +++ b/product_docs/docs/pgd/4/rel_notes/pgd_4.0.0_rel_notes.mdx @@ -12,24 +12,22 @@ EDB Postgres Distributed version 4.0.0 contains BDR 34.0 and HARP 2.0. BDR 4.0 i semantic versioning (for details see [semver.org](https://semver.org/)). The two previous major versions are 3.7 and 3.6. 
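To check which of the capabilities described below are present on a given node, you can ask the extension itself. This is an illustrative sketch only: it assumes a BDR-enabled database named `bdrdb`, and the exact names depend on your deployment.

```shell
# Report the installed BDR extension version on this node.
psql -d bdrdb -c "SELECT extversion FROM pg_extension WHERE extname = 'bdr';"

# List the feature flags for this build; per the notes below, bdr.bdr_features()
# reports which optional features are available on this Postgres distribution.
psql -d bdrdb -c "SELECT bdr.bdr_features();"
```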
-| Component | Version | Type | Description | -| --------- | ------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| BDR | 4.0.0 | Feature | BDR on EDB Postgres Advanced 14 now supports following features which were previously only available on EDB Postgres Extended:
- Commit At Most Once - a consistency feature helping an application to commit each transaction only once, even in the presence of node failures
- Eager Replication - synchronizes between the nodes of the cluster before committing a transaction to provide conflict free replication
- Decoding Worker - separation of decoding into separate worker from wal senders allowing for better scalability with many nodes
- Estimates for Replication Catch-up times
- Timestamp-based Snapshots - providing consistent reads across multiple nodes for retrieving data as they appeared or will appear at a given time
- Automated dynamic configuration of row freezing to improve consistency of UPDATE/DELETE conflicts resolution in certain corner cases
- Assesment checks
- Support for handling missing partitions as conflicts rather than errors
- Advanced DDL Handling for NOT VALID constraints and ALTER TABLE | -| BDR | 4.0.0 | Feature | BDR on community version of PostgreSQL 12-14 now supports following features which were previously only available on EDB Postgres Advanced or EDB Postgres Extended:
- Conflict-free Replicated Data Types - additional data types which provide mathematically proven consistency in asynchronous multi-master update scenarios
- Column Level Conflict Resolution - ability to use per column last-update wins resolution so that UPDATEs on different fields can be "merged" without losing either of them
- Transform Triggers - triggers that are executed on the incoming stream of data providing ability to modify it or to do advanced programmatic filtering
- Conflict triggers - triggers which are called when conflict is detected, providing a way to use custom conflict resolution techniques
- CREATE TABLE AS replication
- Parallel Apply - allow multiple writers to apply the incoming changes | -| BDR | 4.0.0 | Feature | Support streaming of large transactions.

This allows BDR to stream a large transaction (greater than `logical_decoding_work_mem` in size) either to a file on the downstream or to a writer process. This ensures that the transaction is decoded even before it's committed, thus improving parallelism. Further, the transaction can even be applied concurrently if streamed straight to a writer. This improves parallelism even more.

When large transactions are streamed to files, they are decoded and the decoded changes are sent to the downstream even before they are committed. The changes are written to a set of files and applied when the transaction finally commits. If the transaction aborts, the changes are discarded, thus wasting resources on both upstream and downstream.

Sub-transactions are also handled automatically.

This feature is available on PostgreSQL 14, EDB Postgres Extended 13+ and EDB Postgres Advanced 14, see [Choosing a Postgres distribution](/pgd/latest/choosing_server/) appendix for more details on which features can be used on which versions of Postgres.

| -| BDR | 4.0.0 | Feature | The differences that existed in earlier versions of BDR between standard and enterprise edition have been removed. With BDR 4.0 there is one extension for each supported Postgres distribution and version, i.e., PostgreSQL v12-14, EDB Postgres Extended v12-14, and EDB Postgres Advanced 12-14.

Not all features are available on all versions of PostgreSQL, the available features are reported via feature flags using either `bdr_config` command line utility or `bdr.bdr_features()` database function. See [Choosing a Postgres distribution](/pgd/latest/choosing_server/) for more details.

| -| BDR | 4.0.0 | Feature | There is no pglogical 4.0 extension that corresponds to the BDR 4.0 extension. BDR no longer has a requirement for pglogical.

This means also that only BDR extension and schema exist and any configuration parameters were renamed from `pglogical.` to `bdr.`.

| -| BDR | 4.0.0 | Feature | Some configuration options have change defaults for better post-install experience:
- Parallel apply is now enabled by default (with 2 writers). Allows for better performance, especially with streaming enabled.
- `COPY` and `CREATE INDEX CONCURRENTLY` are now streamed directly to writer in parallel (on Postgres versions where streaming is supported) to all available nodes by default, eliminating or at least reducing replication lag spikes after these operations.
- The timeout for global locks have been increased to 10 minutes
- The `bdr.min_worker_backoff_delay` now defaults to 1s so that subscriptions retry connection only once per second on error | -| BDR | 4.0.0 | Feature | Greatly reduced the chance of false positives in conflict detection during node join for table that use origin based conflict detection | +| Component | Version | Type | Description | +| --------- | ------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| BDR | 4.0.0 | Feature | BDR on EDB Postgres Advanced 14 now supports following features which were previously only available on EDB Postgres Extended:
- Commit At Most Once - a consistency feature helping an application to commit each transaction only once, even in the presence of node failures
- Eager Replication - synchronizes between the nodes of the cluster before committing a transaction to provide conflict-free replication&#10;
- Decoding Worker - separation of decoding into a separate worker from WAL senders, allowing for better scalability with many nodes&#10;
- Estimates for Replication Catch-up times
- Timestamp-based Snapshots - providing consistent reads across multiple nodes for retrieving data as they appeared or will appear at a given time
- Automated dynamic configuration of row freezing to improve consistency of UPDATE/DELETE conflict resolution in certain corner cases&#10;
- Assessment checks&#10;
- Support for handling missing partitions as conflicts rather than errors
- Advanced DDL Handling for NOT VALID constraints and ALTER TABLE | +| BDR | 4.0.0 | Feature | BDR on the community version of PostgreSQL 12-14 now supports the following features which were previously only available on EDB Postgres Advanced or EDB Postgres Extended:&#10;
- Conflict-free Replicated Data Types - additional data types which provide mathematically proven consistency in asynchronous multi-master update scenarios
- Column Level Conflict Resolution - ability to use per column last-update wins resolution so that UPDATEs on different fields can be "merged" without losing either of them
- Transform Triggers - triggers that are executed on the incoming stream of data, providing the ability to modify it or to do advanced programmatic filtering&#10;
- Conflict triggers - triggers which are called when a conflict is detected, providing a way to use custom conflict resolution techniques&#10;
- CREATE TABLE AS replication
- Parallel Apply - allows multiple writers to apply the incoming changes | +| BDR | 4.0.0 | Feature | Support streaming of large transactions.&#10;

This allows BDR to stream a large transaction (greater than `logical_decoding_work_mem` in size) either to a file on the downstream or to a writer process. This ensures that the transaction is decoded even before it's committed, thus improving parallelism. Further, the transaction can even be applied concurrently if streamed straight to a writer, improving parallelism still further.&#10;

When large transactions are streamed to files, they are decoded and the decoded changes are sent to the downstream even before they are committed. The changes are written to a set of files and applied when the transaction finally commits. If the transaction aborts, the changes are discarded, and the resources spent decoding and transferring them on both upstream and downstream are wasted.&#10;

Sub-transactions are also handled automatically.

This feature is available on PostgreSQL 14, EDB Postgres Extended 13+, and EDB Postgres Advanced 14; see the [Choosing a Postgres distribution](/pgd/4/choosing_server/) appendix for more details on which features can be used on which versions of Postgres.&#10;

| +| BDR | 4.0.0 | Feature | The differences that existed in earlier versions of BDR between standard and enterprise edition have been removed. With BDR 4.0 there is one extension for each supported Postgres distribution and version, i.e., PostgreSQL v12-14, EDB Postgres Extended v12-14, and EDB Postgres Advanced 12-14.

Not all features are available on all versions of PostgreSQL; the available features are reported via feature flags using either the `bdr_config` command-line utility or the `bdr.bdr_features()` database function. See [Choosing a Postgres distribution](/pgd/4/choosing_server/) for more details.&#10;

| +| BDR | 4.0.0 | Feature | There is no pglogical 4.0 extension that corresponds to the BDR 4.0 extension. BDR no longer has a requirement for pglogical.

This also means that only the BDR extension and schema exist, and any configuration parameters were renamed from `pglogical.` to `bdr.`.&#10;

| +| BDR | 4.0.0 | Feature | Some configuration options have changed defaults for a better post-install experience:&#10;
- Parallel apply is now enabled by default (with 2 writers), allowing for better performance, especially with streaming enabled.&#10;
- `COPY` and `CREATE INDEX CONCURRENTLY` are now streamed directly to a writer in parallel (on Postgres versions where streaming is supported) to all available nodes by default, eliminating or at least reducing replication lag spikes after these operations.&#10;
- The timeout for global locks has been increased to 10 minutes&#10;
- The `bdr.min_worker_backoff_delay` now defaults to 1s so that subscriptions retry connection only once per second on error | +| BDR | 4.0.0 | Feature | Greatly reduced the chance of false positives in conflict detection during node join for tables that use origin-based conflict detection |
| BDR | 4.0.0 | Feature | Move configuration of CAMO pairs to SQL catalogs&#10;

To reduce chances of misconfiguration and make CAMO pairs within the BDR cluster known globally, move the CAMO configuration from the individual node's postgresql.conf to BDR system catalogs managed by Raft. This for example can prevent against inadvertently dropping a node that's still configured to be a CAMO partner for another active node.

Please see the [Upgrades chapter](/pgd/4/upgrades/#upgrading-a-camo-enabled-cluster) for details on the upgrade process.

This deprecates GUCs `bdr.camo_partner_of` and `bdr.camo_origin_for` and replaces the functions `bdr.get_configured_camo_origin_for()` and `get_configured_camo_partner_of` with `bdr.get_configured_camo_partner`.

| -| HARP | 2.0.0 | Change | Complete rewrite of system in golang to optimize all operations | -| HARP | 2.0.0 | Change | Cluster state can now be bootstrapped or revised via YAML | -| HARP | 2.0.0 | Feature | Configuration now in YAML, configuration file changed from `harp.ini` to `config.yml` | -| HARP | 2.0.0 | Feature | HARP Proxy deprecates need for HAProxy in supported architecture.

The use of HARP Router to translate DCS contents into appropriate online or offline states for HTTP-based URI requests meant a load balancer or HAProxy was necessary to determine the lead master. HARP Proxy now does this automatically without periodic iterative status checks.

| -| HARP | 2.0.0 | Feature | Utilizes DCS key subscription to respond directly to state changes.

With relevant cluster state changes, the cluster responds immediately, resulting in improved failover and switchover times.

| -| HARP | 2.0.0 | Feature | Compatibility with etcd SSL settings.

It is now possible to communicate with etcd through SSL encryption.

| -| HARP | 2.0.0 | Feature | Zero transaction lag on switchover.

Transactions are not routed to the new lead node until all replicated transactions are replayed, thereby reducing the potential for conflicts.

| -| HARP | 2.0.0 | Feature | Experimental BDR Consensus layer.

Using BDR Consensus as the Distributed Consensus Service (DCS) reduces the amount of change needed for implementations.

| -| HARP | 2.0.0 | Feature | Experimental built-in proxy.

Proxy implementation for increased session control.

| - - +| HARP | 2.0.0 | Change | Complete rewrite of system in golang to optimize all operations | +| HARP | 2.0.0 | Change | Cluster state can now be bootstrapped or revised via YAML | +| HARP | 2.0.0 | Feature | Configuration now in YAML, configuration file changed from `harp.ini` to `config.yml` | +| HARP | 2.0.0 | Feature | HARP Proxy deprecates need for HAProxy in supported architecture.

The use of HARP Router to translate DCS contents into appropriate online or offline states for HTTP-based URI requests meant a load balancer or HAProxy was necessary to determine the lead master. HARP Proxy now does this automatically without periodic iterative status checks.

| +| HARP | 2.0.0 | Feature | Utilizes DCS key subscription to respond directly to state changes.

With relevant cluster state changes, the cluster responds immediately, resulting in improved failover and switchover times.

| +| HARP | 2.0.0 | Feature | Compatibility with etcd SSL settings.

It is now possible to communicate with etcd through SSL encryption.

| +| HARP | 2.0.0 | Feature | Zero transaction lag on switchover.

Transactions are not routed to the new lead node until all replicated transactions are replayed, thereby reducing the potential for conflicts.

| +| HARP | 2.0.0 | Feature | Experimental BDR Consensus layer.

Using BDR Consensus as the Distributed Consensus Service (DCS) reduces the amount of change needed for implementations.

| +| HARP | 2.0.0 | Feature | Experimental built-in proxy.

Proxy implementation for increased session control.

| diff --git a/product_docs/docs/pge/15/deploy_options.mdx b/product_docs/docs/pge/15/deploy_options.mdx index e4452e3cb8a..c00ec3f53e0 100644 --- a/product_docs/docs/pge/15/deploy_options.mdx +++ b/product_docs/docs/pge/15/deploy_options.mdx @@ -5,11 +5,9 @@ originalFilePath: index.md --- - - The deployment options include: -- [Installing](installing) on a virtual machine or physical server using native packages +- [Installing](installing) on a virtual machine or physical server using native packages - Deploying it with [EDB Postgres Distributed](/pgd/latest/) using [Trusted Postgres Architect](/pgd/latest/deploy-config/deploy-tpa/) diff --git a/product_docs/docs/pge/16/deploy_options.mdx b/product_docs/docs/pge/16/deploy_options.mdx index 3902933cd39..502ae6e09a8 100644 --- a/product_docs/docs/pge/16/deploy_options.mdx +++ b/product_docs/docs/pge/16/deploy_options.mdx @@ -5,11 +5,9 @@ originalFilePath: index.md --- - - The deployment options include: -- [Installing](installing) on a virtual machine or physical server using native packages +- [Installing](installing) on a virtual machine or physical server using native packages - Deploying it with [EDB Postgres Distributed](/pgd/latest/) using [Trusted Postgres Architect](/pgd/latest/deploy-config/deploy-tpa/) diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx index 96e2d7bbdbd..e2c52bc4c89 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx @@ -17,16 +17,16 @@ running in private, public, hybrid, or multi-cloud environments. is a multi-master implementation of Postgres designed for high performance and availability. PGD generally requires deployment using -[Trusted Postgres Architect (TPA)](/pgd/latest/tpa/), +[Trusted Postgres Architect (TPA)](/pgd/latest/deploy-config/deploy-tpa/deploying/), a tool that uses [Ansible](https://www.ansible.com) to provision and deploy PGD clusters. EDB Postgres Distributed for Kubernetes offers a different way of deploying PGD clusters, leveraging containers and Kubernetes. The advantages are that the resulting architecture: -- Is self-healing and robust. -- Is managed through declarative configuration. -- Takes advantage of the vast and growing Kubernetes ecosystem. +- Is self-healing and robust. +- Is managed through declarative configuration. +- Takes advantage of the vast and growing Kubernetes ecosystem. ## Relationship with EDB Postgres for Kubernetes @@ -62,7 +62,7 @@ EDB Postgres Distributed for Kubernetes manages the following: - Data nodes. A node is a database and is managed by EDB Postgres for Kubernetes, creating a `Cluster` with a single instance. -- [Witness nodes](https://www.enterprisedb.com/docs/pgd/latest/nodes/#witness-nodes) +- [Witness nodes](/pgd/latest/node_management/#witness-nodes) are basic database instances that don't participate in data replication. Their function is to guarantee that consensus is possible in groups with an even number of data nodes or after network partitions. Witness @@ -108,7 +108,7 @@ distributed multi-master capabilities and to offer high availability. The Always On architectures are built from either one group in a single location or two groups in two separate locations. 
-See [Choosing your architecture](/pgd/latest/architectures/) in the PGD documentation +See [Choosing your architecture](/pgd/latest/planning/architectures/) in the PGD documentation for more information. ## Deploying PGD on Kubernetes @@ -119,7 +119,7 @@ adaptations are necessary to translate PGD into the Kubernetes ecosystem. ### Images and operands You can configure PGD to run one of three Postgres distributions. See the -[PGD documentation](/pgd/latest/choosing_server/) +[PGD documentation](/pgd/latest/planning/choosing_server/) to understand the features of each distribution. To function in Kubernetes, containers are provided for each Postgres @@ -164,7 +164,7 @@ of Kubernetes availability zones to enable high-availability architectures, including the Always On recommended architectures. You can realize the *Always On Single Location* architecture shown in -[Choosing your architecture](/pgd/latest/architectures/) in the PGD documentation on +[Choosing your architecture](/pgd/latest/planning/architectures/) in the PGD documentation on a single Kubernetes cluster with three availability zones. ![Always On Single Region](./images/always_on_1x3_updated.png) @@ -186,13 +186,14 @@ reliably communicate with each other. ![Multiple Kubernetes clusters](./images/k8s-architecture-multi.png) -[Always On multi-location PGD architectures](https://www.enterprisedb.com/docs/pgd/latest/architectures/) +[Always On multi-location PGD architectures](/pgd/latest/planning/architectures/) can be realized on multiple Kubernetes clusters that meet the connectivity requirements. For more information, see ["Connectivity"](connectivity.md). -!!! Note Regions and availability zones +!!! Note Regions and availability zones + When creating Kubernetes clusters in different regions or availability zones for cross-regional replication, ensure the clusters can communicate with each other by enabling network connectivity. Specifically, every service created with a `-node` or `-group` suffix must be discoverable by all other `-node` and `-group` services. You can achieve this by deploying a network connectivity application like [Submariner](https://submariner.io/) on every cluster. diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/private_registries.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/private_registries.mdx index ae06a508bff..87b57313330 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/private_registries.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/identify_images/private_registries.mdx @@ -8,6 +8,7 @@ Kubernetes operators, as well as various operands, are kept in private container image registries under `docker.enterprisedb.com`. !!! Important + Access to the private registries requires an account with EDB and is reserved for EDB customers with a valid [subscription plan](https://www.enterprisedb.com/products/plans-comparison#selfmanagedenterpriseplan). Credentials are run through your EDB account. @@ -32,6 +33,7 @@ log in to the EDB container registry, for example, through `docker login` or a [`kubernetes.io/dockerconfigjson` pull secret](https://kubernetes.io/docs/concepts/configuration/secret/#secret-types). !!! Important + Each repository contains all the images you can access with your plan. You don't need to connect to different repositories to access different images, such as operator or operand images. 
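As a concrete illustration of the login flow described above, the following sketch uses the repository name `k8s_enterprise_pgd` and an `EDB_REGISTRY_TOKEN` environment variable as placeholders; substitute the repository and token shown in your EDB account.

```shell
# Log in to the EDB container registry with your repository credentials.
echo "$EDB_REGISTRY_TOKEN" | docker login docker.enterprisedb.com/k8s_enterprise_pgd \
  --username k8s_enterprise_pgd --password-stdin

# Create an equivalent kubernetes.io/dockerconfigjson pull secret for the operator.
kubectl create secret docker-registry edb-pull-secret \
  --docker-server=docker.enterprisedb.com/k8s_enterprise_pgd \
  --docker-username=k8s_enterprise_pgd \
  --docker-password="$EDB_REGISTRY_TOKEN"
```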
@@ -80,7 +82,8 @@ EDB Postgres Distributed (PGD) version 5 on three PostgreSQL distributions: - EDB Postgres Extended !!! Important - See [Choosing a Postgres distribution](/pgd/latest/choosing_server/) + + See [Choosing a Postgres distribution](/pgd/latest/planning/choosing_server/) in the PGD documentation for details and a comparison of PGD on the different supported PostgreSQL distributions. @@ -102,5 +105,6 @@ The table shows the image name prefix for each Postgres distribution. | EDB Postgres Advanced | 15, 14 | `edb-postgres-advanced-pgd` | `k8s_enterprise_pgd` | !!! Note Image naming + For more information on operand image naming and proxy image naming, see [Identify your image name](identify_image_name/). diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/known_issues.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/known_issues.mdx index e2cdbda98de..91397d1c8ae 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/known_issues.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/known_issues.mdx @@ -4,11 +4,11 @@ title: 'Known issues and limitations' These known issues and limitations are in the current release of EDB Postgres Distributed for Kubernetes. -## Postgres major version upgrades +## Postgres major version upgrades This version of EDB Postgres Distributed for Kubernetes **doesn't support** major version upgrades of Postgres. -## Data migration +## Data migration This version of EDB Postgres Distributed for Kubernetes **doesn't support** migrating from existing Postgres databases. @@ -21,7 +21,7 @@ This limitation applies to both the open-source and EDB versions of PgBouncer. To configure an EDB Postgres Distributed for Kubernetes environment, you must apply a `PGDGroup` YAML object to each Kubernetes cluster. Applying this object creates all necessary services for implementing a distributed architecture. - + If you added a `spec.backup` section to this `PGDGroup` object with the goal of setting up a backup configuration, the backup will fail unless you also set the `spec.backup.cron.schedule` value. @@ -29,9 +29,9 @@ Error output example: ``` The PGDGroup "region-a" is invalid: spec.backup.cron.schedule: Invalid value: "": Empty spec string -``` +``` -### Workaround +### Workaround To work around this issue, add a `spec.backup.cron.schedule` section with a schedule that meets your requirements, for example: @@ -51,12 +51,12 @@ spec: suspend: false immediate: true schedule: "0 */5 * * * *" -``` +``` -## Known issues and limitations in EDB Postgres Distributed +## Known issues and limitations in EDB Postgres Distributed All issues and limitations known for the EDB Postgres Distributed version that you include in your deployment also affect your EDB Postgres Distributed for Kubernetes instance. For example, if the EDB Postgres Distributed version you're using is 5.x, your EDB Postgres Distributed for Kubernetes -instance will be affected by these [5.x known issues](/pgd/5/known_issues/) and [5.x limitations](/pgd/5/limitations/). +instance will be affected by these [5.x known issues](/pgd/5/known_issues/) and [5.x limitations](/pgd/latest/planning/limitations/). 
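After applying the `spec.backup.cron.schedule` workaround above, you can confirm that the API server accepted the schedule. This is a hedged sketch: it assumes the amended manifest was saved as `region-a.yaml` and that the `PGDGroup` custom resource is registered as `pgdgroup` in your cluster.

```shell
# Apply the amended PGDGroup manifest.
kubectl apply -f region-a.yaml

# Confirm the schedule was accepted rather than rejected as an empty spec string.
kubectl get pgdgroup region-a -o jsonpath='{.spec.backup.cron.schedule}'
```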
diff --git a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx index 6b72fc1e955..1df83d73de1 100644 --- a/product_docs/docs/postgres_for_kubernetes/1/logging.mdx +++ b/product_docs/docs/postgres_for_kubernetes/1/logging.mdx @@ -17,6 +17,7 @@ Each log entry has the following fields: - `logging_podName` – The pod where the log was created. !!! Warning + Long-term storage and management of logs is outside the operator's purview, and needs to be provided at the level of the Kubernetes installation. See the @@ -24,6 +25,7 @@ Each log entry has the following fields: documentation. !!! Info + If your log ingestion system requires it, you can rename the `level` and `ts` field names using the `log-field-level` and `log-field-timestamp` flags of the operator controller. Edit the `Deployment` definition of the `cloudnative-pg` operator. @@ -91,6 +93,7 @@ To enable this support, add the required `pgaudit` parameters to the `postgresql section in the configuration of the cluster. !!! Important + You need to add the PGAudit library to `shared_preload_libraries`. EDB Postgres for Kubernetes adds the library based on the presence of `pgaudit.*` parameters in the postgresql configuration. @@ -101,6 +104,7 @@ The operator also takes care of creating and removing the extension from all the available databases in the cluster. !!! Important + EDB Postgres for Kubernetes runs the `CREATE EXTENSION` and `DROP EXTENSION` commands in all databases in the cluster that accept connections. @@ -181,7 +185,7 @@ for more details about each field in a record. ## EDB Audit logs Clusters that are running on EDB Postgres Advanced Server (EPAS) -can enable [EDB Audit](https://www.enterprisedb.com/docs/epas/latest/epas_guide/03_database_administration/05_edb_audit_logging/) as follows: +can enable [EDB Audit](/epas/latest/epas_security_guide/05_edb_audit_logging/) as follows: ```yaml apiVersion: postgresql.k8s.enterprisedb.io/v1 @@ -264,7 +268,7 @@ See the example below: } ``` -See EDB [Audit file](https://www.enterprisedb.com/docs/epas/latest/epas_guide/03_database_administration/05_edb_audit_logging/) +See EDB [Audit file](/epas/latest/epas_security_guide/05_edb_audit_logging/) for more details about the records' fields. 
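To verify that audit records are actually flowing after either configuration above, you can inspect the instance pod directly. A minimal sketch follows; the pod name `cluster-example-1` is a placeholder for one of your cluster's instance pods.

```shell
# Confirm the audit library was added to shared_preload_libraries by the operator.
kubectl exec -ti cluster-example-1 -- psql -U postgres -c "SHOW shared_preload_libraries;"

# Stream the structured JSON log and filter for audit records.
kubectl logs -f cluster-example-1 | grep -i audit
```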
## Other logs From ffebb77db0ba4c3412396c5436d7efaccb0a6335 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Tue, 30 Jul 2024 03:30:51 +0000 Subject: [PATCH 12/15] Link fixes --- .../ai-ml/using-tech-preview/index.mdx | 8 +- .../console/agent/install-agent.mdx | 2 +- .../overview/guide-and-getting-started.mdx | 2 +- .../overview/overview-and-concepts.mdx | 2 +- .../playground/1/01_examples/link-test.mdx | 2 - .../community-archive/contributing.mdx | 68 ------- .../community-archive/planning/demo.mdx | 169 ------------------ .../community-archive/planning/guide.mdx | 52 ------ .../community-archive/planning/howto.mdx | 55 ------ .../community-archive/planning/index.mdx | 62 ------- .../community-archive/planning/robustness.mdx | 35 ---- .../community-archive/planning/tutorial.mdx | 58 ------ .../community-archive/style/code.mdx | 26 --- .../community-archive/style/images.mdx | 11 -- .../community-archive/style/index.mdx | 78 -------- .../community-archive/style/style.mdx | 34 ---- .../security/advisories/cve202331043.mdx | 2 +- .../security/assessments/cve-2024-0985.mdx | 4 +- .../administering_cluster/projects.mdx | 2 +- .../creating_a_cluster/index.mdx | 2 +- .../getting_started/managing_cluster.mdx | 4 +- .../release/getting_started/overview.mdx | 4 +- .../preparing_gcp/index.mdx | 2 +- .../release/overview/03_security/index.mdx | 2 +- .../01_connecting_from_azure/index.mdx | 2 +- .../02_connecting_from_aws/index.mdx | 2 +- .../connecting_from_gcp/index.mdx | 2 +- .../42.5.4.2/installing/windows.mdx | 2 +- .../migration_portal/4/known_issues_notes.mdx | 2 +- .../migration_toolkit/55/installing/macos.mdx | 2 +- .../55/installing/windows.mdx | 2 +- .../odbc_connector/13/installing/windows.mdx | 2 +- .../odbc_connector/16/installing/windows.mdx | 2 +- .../8/considerations/setup_ha_using_efm.mdx | 18 +- .../docs/pem/8/installing/windows/index.mdx | 2 +- .../08_pem_define_aws_instance_connection.mdx | 2 +- .../05_pem_agent_privileges.mdx | 2 +- .../21_performance_diagnostic.mdx | 2 +- .../performance_diagnostic.mdx | 2 +- .../docs/pgbouncer/1/installing/windows.mdx | 2 +- .../docs/pgd/3.7/harp/03_installation.mdx | 2 +- product_docs/docs/pgd/4/bdr/catalogs.mdx | 8 +- product_docs/docs/pgd/4/bdr/functions.mdx | 2 +- product_docs/docs/pgd/4/deployments/index.mdx | 2 +- .../docs/pgd/4/harp/03_installation.mdx | 2 +- .../pgd/5/appusage/table-access-methods.mdx | 2 +- .../5/consistency/column-level-conflicts.mdx | 4 +- .../pgd/5/ddl/ddl-pgd-functions-like-ddl.mdx | 2 +- .../docs/pgd/5/planning/choosing_server.mdx | 6 +- .../quickstart/further_explore_conflicts.mdx | 4 +- .../pgd/5/quickstart/quick_start_linux.mdx | 2 +- .../pgd/5/reference/conflict_functions.mdx | 4 +- .../docs/pgd/5/reference/pgd-settings.mdx | 2 +- .../reference/streamtriggers/rowfunctions.mdx | 2 +- .../docs/pgd/5/routing/installing_proxy.mdx | 2 +- product_docs/docs/pgd/5/security/roles.mdx | 2 +- .../1/architecture.mdx | 4 +- .../1/backup.mdx | 12 +- .../1/installation_upgrade.mdx | 2 +- .../1/openshift.mdx | 6 +- .../1/recovery.mdx | 2 +- .../reference/tpaexec-download-packages.mdx | 2 +- 62 files changed, 83 insertions(+), 727 deletions(-) delete mode 100644 advocacy_docs/playground/community-archive/contributing.mdx delete mode 100644 advocacy_docs/playground/community-archive/planning/demo.mdx delete mode 100644 advocacy_docs/playground/community-archive/planning/guide.mdx delete mode 100644 advocacy_docs/playground/community-archive/planning/howto.mdx delete mode 100644 
advocacy_docs/playground/community-archive/planning/index.mdx delete mode 100644 advocacy_docs/playground/community-archive/planning/robustness.mdx delete mode 100644 advocacy_docs/playground/community-archive/planning/tutorial.mdx delete mode 100644 advocacy_docs/playground/community-archive/style/code.mdx delete mode 100644 advocacy_docs/playground/community-archive/style/images.mdx delete mode 100644 advocacy_docs/playground/community-archive/style/index.mdx delete mode 100644 advocacy_docs/playground/community-archive/style/style.mdx diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx index 981d119b782..1b413181cd8 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx @@ -5,12 +5,12 @@ description: Using the EDB Postgres AI AI/ML tech preview to build a simple retr navigation: - working-with-ai-data-in-postgres - working-with-ai-data-in-s3 -- standard-encoders +- additional_functions --- -This section shows how you can use your [newly installed pgai tech preview](install-tech-preview) to retrieve and generate AI data in Postgres. +This section shows how you can use your [newly installed pgai tech preview](../install-tech-preview) to retrieve and generate AI data in Postgres. * [Working with AI data in Postgres](working-with-ai-data-in-postgres) details how to use the pgai extension to work with AI data stored in Postgres tables. -* [Working with AI data in S3](working-with-ai-data-in-s3) covers how to use the pgai extension to work with AI data stored in S3 compatible object storage. -* [Standard encoders](standard-encoders) goes through the standard encoder LLMs that are supported by the pgai extension. +* [Working with AI data in S3](working-with-ai-data-in-S3) covers how to use the pgai extension to work with AI data stored in S3 compatible object storage. +* [Additional functions](additional_functions) goes through the standard encoder LLMs and other functions that are supported by the pgai extension. diff --git a/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx b/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx index c10c8f73d72..0bb9078f16f 100644 --- a/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx +++ b/advocacy_docs/edb-postgres-ai/console/agent/install-agent.mdx @@ -75,7 +75,7 @@ Create a Beacon configuration directory in your home directory: mkdir ${HOME}/.beacon ``` -Next, configure Beacon Agent by setting the access key (the one you obtained while [Creating a machine user](create_machine_user)) and project ID: +Next, configure Beacon Agent by setting the access key (the one you obtained while [Creating a machine user](create-machine-user)) and project ID: ``` export BEACON_AGENT_ACCESS_KEY= diff --git a/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx b/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx index 95d88aebbf6..7dbfe56301f 100644 --- a/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/guide-and-getting-started.mdx @@ -26,4 +26,4 @@ You'll want to look at the [EDB Postgres® AI Platform Agent](/edb-postgres-ai/c ## Do you want to know more about the EDB Postgres AI Cloud Service? 
-You'll want to look at the [EDB Postgres® AI Cloud Service](/edb-postgres-ai/databases/cloudservice) documentation, which covers the Cloud Service and its databases. +You'll want to look at the [EDB Postgres® AI Cloud Service](/edb-postgres-ai/cloud-service) documentation, which covers the Cloud Service and its databases. diff --git a/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx b/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx index 5801bc7e378..259554ceeb6 100644 --- a/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx +++ b/advocacy_docs/edb-postgres-ai/overview/overview-and-concepts.mdx @@ -38,7 +38,7 @@ All of these components are available on the EDB Postgres AI Cloud Service, and ## [EDB Postgres AI Lakehouse analytics](/edb-postgres-ai/analytics) Filtering out the data noise and revealing insights and value, Lakehouse analytics brings both structured relational data in Postgres and unstructured data in object storage together for exploration. -* **[Lakehouse nodes](/edb-postgres-ai/analytics/lakehouse)** +* **[Lakehouse nodes](/edb-postgres-ai/analytics/concepts/)** * At the heart of Analytics is custom-built object storage for your data. Built to bring structured and unstructured data together, Lakehouse nodes support numerous formats to bring cold data in, ready for analysis. ## [EDB Postgres AI AI/ML](/edb-postgres-ai/ai-ml) diff --git a/advocacy_docs/playground/1/01_examples/link-test.mdx b/advocacy_docs/playground/1/01_examples/link-test.mdx index 5815e78e1ff..41bc90f68e6 100644 --- a/advocacy_docs/playground/1/01_examples/link-test.mdx +++ b/advocacy_docs/playground/1/01_examples/link-test.mdx @@ -51,8 +51,6 @@ expect if there were no alterations to the original filesystem (i.e., path ended [//www.google.com](//www.google.com) -[ftp://user:password@host:port/path](ftp://user:password@host:port/path) - ## Rationale for our weird-ass URL rewriting Ok, so here's the deal: links are written in these files while looking at a filesystem heirarchy. Then, that heirarchy is transformed into a *similar* but *different* heirarchy. And those links, even with relative paths in them, still gotta work. diff --git a/advocacy_docs/playground/community-archive/contributing.mdx b/advocacy_docs/playground/community-archive/contributing.mdx deleted file mode 100644 index d9a42f95bf9..00000000000 --- a/advocacy_docs/playground/community-archive/contributing.mdx +++ /dev/null @@ -1,68 +0,0 @@ ---- -title: Contribute -description: Guidelines for contributing to EDB Docs. -iconName: Developer ---- - -*Note: per [Github convention](https://help.github.com/en/github/building-a-strong-community/setting-guidelines-for-repository-contributors), this content may move this project to a CONTRIBUTING.md file in this project's root directory.* - -Got information that'd useful as a part of these docs? A link that's relevant to a topic here? An idea for a demo? Spot something that could be improved? We welcome all contributors! - -## Thanks for contributing! - -It's folks like you, working together, that make PostgreSQL such a great community to be a part of. We appreciate everyone's effort and time to help us make this documentation project the best that it can be, for people new to PostgreSQL, EDB customers, the good people who support those customers, and anyone else who uses PostgreSQL in anger and with love. - -## How can I contribute? - -There are lots of ways to contribute, and all of them involve Github. We love Github. 
For detailed instructions on pulling and building this repository locally, see [Authoring](/community/authoring) - -### Fixing problems - -Spot a typo? A bug in a code sample? Awkward or misleading phrasing? Submit a correction! - -1. Click the "Edit this page" link at the top-right of the page where you spotted the problem. This will take you to GitHub (you may need to log in first). -2. Click the "Fork this project" button to create your own working copy of the docs. GitHub will open the relevant [MDX file](https://www.gatsbyjs.org/docs/mdx/writing-pages/) for editing. -3. Make whatever corrections you see fit, then describe the reason for the changes in the text fields at the bottom of the page. -4. Click "Propose changes" - GitHub create a branch containing your change and show you a diff. At this point you can make further changes in your patch branch, or - if the changes are sufficient - submit a pull request for us to review. -5. Click "Create pull request" - GitHub will give you another chance to describe the reason for your changes; once you're satisfied, click "Create pull request" again. -6. We'll be notified of the pull request - if we have any questions or concerns, we'll comment on it, when it looks good we'll merge it in and build out the changes! - -### Report bugs - -See a problem, but not sure how it should be fixed? [Create an Issue on GitHub!](/community/feedback) - -### Share ideas - -The best way to share ideas for how we can improve EDB Docs is to use [Github Issues](/community/feedback). Please be open to discussion - the best ideas come from collaboration! - -### Share knowledge - -Have ideas for improving an existing topic? Maybe some examples to add, a better demonstration of a concept or technique, or corrections & clarifications? You can make a pull request (as described above) using the "Edit this page" link - we'll review your changes & merge them into the live site! - -Have ideas for a *new* topic? Awesome! You can use a pull request for that too, but first let's talk about writing... - -## Writing topics - -You've got the knowledge and expertise to help - let's work together to share that with folks learning PostgreSQL! - -An effective topic reflects the needs of a specific group of people, its "**audience**". As a person progresses in learning and using PostgreSQL, their needs change and so do the topics that they look to for assistance: - -- **Starting out:** topics that introduce terminology and foundational concepts - *Formats:* [Tutorials](/community/contribute/planning/tutorial), [Live Demos](/community/contribute/planning/demo) -- **Digging in:** topics that solve real problems - *Formats:* [HOWTOs](/community/contribute/planning/guide) -- **Stepping back:** topics that explain how individual tools and concepts fit into the larger system - *Formats:* [Guides](/community/contribute/planning/guide) -- **Retrieving:** Topics that allow fast retrieval of detailed information on specific syntax, settings, capabilities, workflows. - *Formats:* Reference documentation, [Guides](/community/contribute/planning/guide) - -This is a simplification of course: one might be comfortably familiar with the practical tasks involved in configuring a system while still digging in when it comes to tasks such as querying or scaling; we should never assume that a beginner in one area is a beginner in *all* areas! - -So, **start by identifying your audience**, thinking about *who* you're writing for in addition to *what* you're writing about. 
These together will determine *how* you write! - -Once that's done, - -1. [Plan your article](/community/contribute/planning) -2. [Write it up using Markdown / MDX](/community/contribute/style) -3. [Test it!](/community/contribute/planning/robustness) -4. [Submit a pull request](/community/authoring#how-to-make-changes-and-submit-pull-requests) diff --git a/advocacy_docs/playground/community-archive/planning/demo.mdx b/advocacy_docs/playground/community-archive/planning/demo.mdx deleted file mode 100644 index 665aa2ab689..00000000000 --- a/advocacy_docs/playground/community-archive/planning/demo.mdx +++ /dev/null @@ -1,169 +0,0 @@ ---- -title: Live Demo Guidelines -navTitle: Live Demo -description: Tips and guidance for constructing live demos (Katacoda) -tags: - - documentation - - contributing ---- - - - -## Start with something you want to demonstrate - -The Katacoda format is ideal for step-by-step tutorials, but can also be used for demonstrating ad-hoc code or commands. - -Write this out in Markdown format, and spend some time testing it out; Katacoda builds can take a long time, you don't want to have to spend a lot of time waiting on them to fix problems! - -## Decide on the format - -There are two ways of embedding Katacoda: - -1. Panel embeds ([example](/getting-started/installing_postgres/docker)) pop up a console at the bottom of the page, and allow execution of commands embedded within the rest of the page. - These allow a lot of flexibility and don't tie you to a strict step-by-step format - anything you want to demo that can be executed in the console can be included anywhere on the page. -2. Multi-pane embeds ([example](/getting-started/connecting_to_postgres/java/02_JDBC)) put the entire Katacoda environment inline on the page. They can thus include the file browser, editor, etc. as-needed. - Interaction with the rest of the page is not really practical for this format. - -The crucial decision here will generally boil down to whether or not you need an editor. Sure, you can use vim in the console... But that adds a certain level of friction for many readers. **So, if you need the reader to edit files, use the multi-pane embed; otherwise, go for the panel embed.** - -## Panel embed format - -1. [Pull the repository](https://github.com/rocketinsights/edb_docs_advocacy) and create a local working branch - -2. Take your Markdown-formatted tutorial and [add the necessary Frontmatter metadata](../style) - -3. Add a `katacodaPanel` key to the Frontmatter at the top of the page. The value will be key-value pairs for the account and scenario that will be used, along with the language(s) that will be executed in the terminal: - - ```yaml - katacodaPanel: - account: enterprisedb - scenario: sandbox - codelanguages: sql - ``` - The scenario attribute defines the environment that will be used; `sandbox` is a custom PostgreSQL-on-Ubuntu image that Dave put together with the Pagila example database pre-installed, suitable for demonstrating SQL and some management. The property `codelanguages` specifies a comma-separated list of language names: code blocks highlighted in these languages will execute in the terminal when clicked. Specify the highlighting language for a code block next to the opening fence, e.g. - - ~~~ - ```sql - Select * From films; - ``` - ~~~ - - Note: in some cases, you won't need a scenario, just a base environment (for example, demonstrating installation on Ubuntu needs only an Ubuntu environment). 
In these cases, omit the `account` property and specify the name of the environment in the `scenario` property. If no `codelanguages` value is specified, then code blocks marked with `shell` will be executable. - - ```yaml - # panel defintion suitable for demonstrating shell commands on ubuntu - katacodaPanel: - scenario: ubuntu1804 - ``` - -4. Make sure code blocks are marked with the language represented in them (sql, shell, etc) - -5. Add a `` element - this will render a button that will allow the reader to load Katacoda in the page, based on the definition provided in the page's Frontmatter in step #3: - - ```markdown - This is an interactive tutorial - you may launch a console - in your browser to run the examples below. - - ``` - -6. Drop your file (with an mdx extension) in the relevant section of this repository - -7. Test locally - see README for local build instructions - -8. Commit, push your branch to remote, and create a PR - -9. Let the remote branch build, and test that - -10. Merge - -11. View it live! - -## Multi-pane embed format - -1. Pull [the Katacoda repository](https://github.com/EnterpriseDB/katacoda-scenarios) (for experimentation, you can create your own) - -2. Use [the katacoda-cli tool](https://www.katacoda.com/cli) to create a new scenario - - ``` - katacoda scenario:create - ``` - - Follow the prompts, and refer to the tutorial you started with for the number of steps you'll want. - -3. The tool generates a .md file for each step and an `index.json` file to tie them all together: copy each step into its respective .md file, and edit the index.json to give them descriptive names. - - *You can also create custom environments by adding build scripts under the ./environments directory. See [the Katacoda docs](https://www.katacoda.community/custom-environment.html) for details. If using a custom environment, reference it from the backend.imageid value in `index.json`. - -4. **Special markup** - - - To make code blocks clickable to execute in the terminal, add `{{execute}}` to the block's closing: - - ~~~ - ```shell - ls /etc - ```{{execute}} - ~~~ - - - To allow a code block to be copied to the clipboard, add `{{copy}}` to the blocks - - ~~~ - ```python - print('hello') - ```{{copy}} - ~~~ - - - To allow the reader to open a file in the editor, use inline code formatting and add `{{open}}` to the end: - - ~~~ - `index.json`{{open}} - ~~~ - - - To allow the reader copy an entire code block into the editor, replacing its current contents, with a single click... Wrap it in a specially-formatted `
` element:
-
-     ```
-     
-
-     Cheese.say();
-
-     
- ``` - Note: to avoid having to escape the contents, ensure there is a blank line at the start and end. If this is unworkable, ensure the contents are HTML-safe (ampersands, less-than signs escaped). - -5. Commit changes & push to Katacoda repo - - (then wait for it to build) - -6. [Pull the repository](https://github.com/rocketinsights/edb_docs_advocacy) and create a local working branch - -7. Create article page (see [Format](../style) for details) - - Add a `kataCodaPages` key in the Frontmatter at the top of the page. The value will be a list of Katacoda scenario names with an associated Katacoda account for each. - - ```yaml - katacodaPages: - - scenario: install-ubuntu - account: enterprisedb - - scenario: java-jdbc - account: shog9 - ``` - - The scenario name will be used to refer to the scenario later in the page, *and* will be used in the path for the scenario page within the site; for this reason, avoid using the same scenario name across different accounts on the same page (this should never be necessary). - - Once you've defined the relevant scenarios, you'll link to them within the article text using the `KatacodaPageLink` element: - - ```markdown - - ``` - - Include at least a short into / explanation for the link. A good place to start is the description defined in the scenario itself. Ex: - - > This tutorial demonstrates how to connect to an existing PostgreSQL database from Java using JDBC. - - Name the article with an .mdx extension and put it in the relevant section of this repository - -8. Test locally - see README for local build instructions - -9. Commit to branch, push and create PR - -10. Live! diff --git a/advocacy_docs/playground/community-archive/planning/guide.mdx b/advocacy_docs/playground/community-archive/planning/guide.mdx deleted file mode 100644 index f8cba441408..00000000000 --- a/advocacy_docs/playground/community-archive/planning/guide.mdx +++ /dev/null @@ -1,52 +0,0 @@ ---- -title: Guide Guidelines -navTitle: Guides -description: Tips and guidance for constructing Guides -tags: - - documentation - - contributing ---- - -### Preface: Usage -The distinction in these guides between Guides, [HOWTOs](howto) and [Tutorials](tutorial) is drawn for the benefit of the authors and contributors; readers by and large will not care so long as they get the help they're looking for. Use your knowledge of your target audience to choose the format for your article, and title it according to the information it provides. - -## The goal of a Guide - -Provide knowledge gained by experience to the reader, guiding them toward understanding the design and limitations of the system, tool or concept. - -A Guide should be written as instructions on how to *think* about a given system, in contrast to a HOWTO or Tutorial which aim to teach the reader how to *use* the system. - -## The nature of your target audience - -You would write a Guide when your target audience has some hands-on experience with a given tool or platform, and desires a deeper understanding. For example, someone who finds themselves working with PostgreSQL indexes on a regular basis and wishes to gain the ability to intuit their effects on performance might seek out material that explains the purpose of various index types, their design goals, limitations, and contrasting storage requirements. - -## Tips - -1. **Structure your outline** to avoid unnecessary backtracking; try to lay out each feature, problem or idea such that it builds on previously-introduced features, problems or ideas. 
It is easier to retain a narrative than a jumbled collection of ideas. -2. **Identify prerequisites up-front,** provide links to other guides, reference material, HOWTOs, etc. A reader who hasn't yet become proficient in the topic may wish to refer to them first, or in parallel with your article. -3. **Highlight the problem it solves** when introducing a new feature, function or idea. For example, "Range queries can be slow on b-tree indexes; BRIN indexes were created to speed up range query performance while reducing index size". This helps readers more easily recall the information when they most need it! - -## Testing - -- Build a collection of support questions concerned with the subject of the article, and compare them to the information you've presented: - 1. Is the point of confusion underlying the question addressed in the guide? - 2. Does the guide's language and terminology mirror that of the askers'? - 3. Are the general problems faced by the askers called out in the explanations included in the guide? - 4. Could the guide be cited in answers to the questions without becoming gratuitous? - Add or adjust explanations to better match what you observe. -- Discuss the guide with support people and other experts in the subject. Adjust recommendations made by the guide to reflect practical concerns. -- Compare each assertion in the guide to the canonical documentation. Correct any errors or omissions. This becomes increasingly important as the guide ages. - -## Usage - -- Reference the Guide from HOWTOs and Tutorials covering the same tools, systems or platforms. -- Include as prerequisite reading for more advanced Guides. -- Reference the Guide when answering support questions. - - - -## Further reading - -- [Planning an article](.) -- [Writing Style and Approach](../style/style) -- [Creating and Maintaining a Robust Article](robustness) diff --git a/advocacy_docs/playground/community-archive/planning/howto.mdx b/advocacy_docs/playground/community-archive/planning/howto.mdx deleted file mode 100644 index b3463e15d29..00000000000 --- a/advocacy_docs/playground/community-archive/planning/howto.mdx +++ /dev/null @@ -1,55 +0,0 @@ ---- -title: HOWTO Guidelines -navTitle: HOWTOs -description: Tips and guidance for constructing HOWTO articles -tags: - - documentation - - contributing ---- - -### Preface: Usage -The distinction in these guides between [Guides](guide), HOWTOs and [Tutorials](tutorial) is drawn for the benefit of the authors and contributors; readers by and large will not care so long as they get the help they're looking for. Use your knowledge of your target audience to choose the format for your article, and title it according to the information it provides. - -There is some small value in *calling* a HOWTO a tutorial, as this is a common search term. - -## The goal of a HOWTO article - -Walk a reader through the process of accomplishing **a practical task**, the outcome of which is of immediate use to them. A reader may seek out a HOWTO to remind them of the steps needed to complete an infrequently-performed task, or to provide safety when undertaking a complex task which they've never had to take on before. - -The format of a HOWTO is similar to that of a [Tutorial](tutorial), but a tutorial need not produce any useful outcome beyond that of increasing the reader's familiarity with the subject matter. - -If you were cooking breakfast, you might refer first to a tutorial to learn how to crack eggs, and then a HOWTO to learn the steps involved in making a crepe. 
- -## The nature of your target audience - -You would choose to write a HOWTO when your target audience is already familiar with the platform and terminology, but not the steps involved in the specific task they wish to accomplish. - -## Tips - -1. **Perform the task yourself:** taking notes on each action you perform, how long it takes you, and the final result. If you find yourself looking up syntax for commands or other references while performing the task, make notes on these resources as well. -2. **Establish prerequisites:** Identify the desired outcome, prerequisite knowledge and estimated time to complete at the start of your article. Provide links to tutorials, guides or other resources for those who may lack the required familiarity with the target platform. -3. **Record the actions:** each discrete action should be a step in the HOWTO. Keep each step compact so that they're easy to pick out for a distracted reader. -4. **Describe the steps:** Be succinct, so as not to distract from the action itself. Link to additional reading (reference or [guide](guide)) for actions that may not be familiar to the audience. -5. **Highlight potential errors:** It won't be possible to anticipate *every* error, but when a step is known to fail in common situations, call this out and provide a work-around. -6. **Provide links to additional reading material:** Start with any references you used yourself in #1, add other material you linked from step descriptions. - -## Testing - -1. Work through the steps yourself, carefully taking no actions that aren't explicit in the article. Make corrections to address any errors or omissions you encounter. -2. Ask a friend or colleague to work through the HOWTO; observe them quietly if possible, noting any friction points, errors, or dead-ends. - -In both cases, compare the final result of the test to the result you obtained when you started writing. If there are differences, try to figure out how they arose and adjust the steps as needed. - -See also: [Creating and Maintaining a Robust Article](robustness) - -## Usage - -- Reference HOWTOs in answers to support questions -- Link to HOWTOs from articles on related topics - start with any guides or tutorials you included in your "Further reading" section. - -## Further reading - -- [Planning an article](.) -- [Writing Style and Approach](../style) -- [Creating and Maintaining a Robust Article](robustness) -- [Tutorial Guidelines](tutorial) diff --git a/advocacy_docs/playground/community-archive/planning/index.mdx b/advocacy_docs/playground/community-archive/planning/index.mdx deleted file mode 100644 index d933d6cf479..00000000000 --- a/advocacy_docs/playground/community-archive/planning/index.mdx +++ /dev/null @@ -1,62 +0,0 @@ ---- -title: Planning an article -navTitle: Planning -description: Tips and guidelines for preparing to write an article -tags: - - documentation - - contributing ---- - -So, you'd like to contribute an article to this repository? That's awesome! Below are a few tips that you might find useful. These are all just guidelines - use what you can, ignore what you cannot; you're here to share your knowledge, and we're here to help you do that! - -## Core philosophy - -This repository exists to make PostgreSQL easier to learn and use for people from myriad different backgrounds, possessing different levels of knowledge and understanding. 
Each article assumes a prerequisite level of knowledge and sets out to assist the reader in gaining more - thus allowing them to solve new problems, perform new tasks, and understand more advanced articles. - -No one article can or should be equally useful to all people; each of us possesses unique sets of knowledge - you can be an expert in many things and still know nothing about many more. Every article should be written with the goal of [telling *someone* about *something* for the very first time](https://xkcd.com/1053/)! - -## Identifying your audience - -Every article has an intended audience, a group of 1 or more people that you, the author, wish to help. You probably don't know *everyone* in that group, but you've certainly interacted with at least a few of its members - folks you've worked with, talked with, or assisted in some way. Try to identify them, and keep them firmly in mind while writing; doing so will help you to stay focused, help you to avoid falling into the trap of writing an article that *no actual person needs to read!* - -In particular, try to identify: - -1. **What your audience needs to know, and why they need it.** What problems does this information allow them to solve? What new tasks will it allow them to perform? Which new opportunities for learning will understand the concepts you present open for them? -2. **What your audience *already* knows.** There is a limit to how much you can convey in the space of a single article. By identifying the prerequisite knowledge up-front, you can guide readers who lack it to other resources, while providing encouragement to those who already possess the necessary background information that your article will suit them. - -Your audience's needs should determine the nature of the article you write. If they're... - -- ...Just getting started? [Write a Tutorial](tutorial) or a [Live Demo](demo) -- ...Seeking to perform a specific task? [Write a HOWTO](howto) -- ...Wanting to better understand a tool, system or concept? [Write a Guide](guide) - -## Hang on to what you learned - -The topic you're presenting was new to you at one point too... You may not remember every struggle you encountered along the way, but at least you may know what you've forgotten and had to look up again while preparing to write! Hang on to that, keep notes on resources you drew on to refresh your memory, new things you learned along the way, and anything that struck you as unexpectedly delightful or interesting - chances are, your readers will appreciate them too. - -## Start with an outline - -Even if your article seems fairly linear, an outline is useful to quickly present the summary, structure, and a rough idea of length. The system used to present these articles provides an outline in the form of the table of contents, built from the headings used within the document - as illustrated on this very page! - -Starting with an outline can also help avoid mistakes, such as omitting important steps or sub-topics. And if you find your outline is growing unwieldy, you may wish to consider breaking it up into multiple articles. - -## Create a test project - -Whenever possible, the code and instructions presented in an article should be tested - both prior to publication, and regularly thereafter. Few things are more frustrating to a reader than diligently following instructions only to face an error! - -Create a project that allows you to test each action described in your article, and use it (or recreate it) regularly as you write and test. 
For more on testing, see: [Creating and Maintaining a Robust Article](/community/contribute/planning/robustness). - -## Keep notes on related and more advanced topics - -Your article should not be a dead-end! Keep a list of related topics that you encounter as you write; work links into the text where immediately relevant, or include them in a "Further reading" section at the end of your article. - -## Don't let the perfect become the enemy of the good - -Every article here, including this one, is a living document: mistakes can be corrected, new information added, obsolete information removed. There's an "Edit this page" button at the top here, and there will be on your article as well - so don't worry about getting it perfect the first time, as you and others will be able to return to it as need-be. What's important is that we're able to work together and help someone learn. - -## Further reading - -- [Writing Style and Approach](/community/contribute/style/style) -- [Creating and Maintaining a Robust Article](/community/contribute/planning/robustness) - - diff --git a/advocacy_docs/playground/community-archive/planning/robustness.mdx b/advocacy_docs/playground/community-archive/planning/robustness.mdx deleted file mode 100644 index a0b1f1265fe..00000000000 --- a/advocacy_docs/playground/community-archive/planning/robustness.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: Creating and Maintaining a Robust Article -navTitle: Robustness -description: Tips for writing and modifying articles such that they can be tested and kept up-to-date -tags: - - documentation - - contributing ---- - -No matter how useful an article might be when first written, there will be mistakes... And over time, information will become obsolete, inaccurate, or misleading. This is unavoidable - but we can make identifying and fixing problems easier! - -## Test all code and commands - -All commands included in an article should be *run*; all code should be compiled and/or run. And the preceding text and code should be sufficient to allow someone else to create an environment in which to run them, successfully. If, at some later date, the code or command no longer runs successfully then it will be obvious that the article must be updated. - -## Capture expected output - -This is useful anyway: a reader who wishes to know if their command or code executed successfully can compare their results to those in the article. And it serves double-duty for maintenance: if the output changes, then the article is out of date. - -For [live demos](demo) there's a third advantage: programmatically verifying that a step has been completed successfully before moving on to the next step. - -## Include a link to a repository or project that contains the final, working code - -This is primarily relevant for tutorials and HOWTOs: when the goal of the article is to allow the reader to produce a certain result, that result can be produced ahead of time: if it no longer functions, then either the repository is out of date, the article is out of date, or both are out of date. - -Make sure to include a README file with the repository or project that links to the article's source, so that both can be updated at once when bugs are identified. - -### Verify that the repository can be created from the instructions provided in the article - -If you *started* with a test project, and then wrote a tutorial or HOWTO to document the steps required to create that, you should follow the steps in the article. 
Compare those results to what you started with, and ensure that what you're providing to readers matches what they can reasonably be expected to produce on their own! - -## Ask someone else to test - -It's hard to see your own blind spots - so ask someone else to test! A friend, co-worker, ideally someone in your target audience... Another set of eyes to help you see what your work looks like to others. If they can't successfully reproduce the results, then either the instructions are incorrect, or could use some clarification. - diff --git a/advocacy_docs/playground/community-archive/planning/tutorial.mdx b/advocacy_docs/playground/community-archive/planning/tutorial.mdx deleted file mode 100644 index 60d6f4e237f..00000000000 --- a/advocacy_docs/playground/community-archive/planning/tutorial.mdx +++ /dev/null @@ -1,58 +0,0 @@ ---- -title: Tutorial Guidelines -navTitle: Tutorials -description: Tips and guidance for constructing Tutorials -tags: - - documentation - - contributing ---- - -### Preface: Usage -The distinction in these guides between [Guides](guide), [HOWTOs](howto) and Tutorials is drawn for the benefit of the authors and contributors; readers by and large will not care so long as they get the help they're looking for. Use your knowledge of your target audience to choose the format for your article, and title it according to the information it provides. - -There is some small value in *calling* a tutorial a tutorial, as this is a common search term. - -## The goal of a Tutorial - -Walk through the steps needed to accomplish some simple task in a system, introducing core concepts and terms along the way. A reader may seek out a tutorial to gain a basic understanding of a system before delving into more practical or advanced topics. They may even scan through a tutorial when evaluating a product, as a way to gauge the difficulty involved in using it. - -Unlike a [HOWTO](howto), the end result of completing a Tutorial need not be of any practical use beyond that of providing the reader familiarity with the system and its terminology. - -Both Tutorials and HOWTOs are rote learning, akin to memorizing multiplication tables, playing scales, or following a recipe in a cookbook. They are a poor way of learning, but for people who are new to a topic they allow the learner to quickly demonstrate basic competency. By providing a foundation on which to build, a good tutorial enables self-learning! - -## The nature of your target audience - -You would choose to write a Tutorial when your target audience has little or no past exposure to a given tool or platform and wishes to quickly gain enough familiarity to evaluate or research it. - -## Tips - -1. **Keep it short:** choose small tasks that can be accomplished in minutes, not hours. Err on the side of simplicity over practicality - remember, the goal is to introduce concepts not necessarily accomplish anything of practical use, though if you can do both so much the better! -2. **Perform the task yourself:** taking notes on each action you perform, how long it takes you, and the final result. If you find yourself looking up syntax for commands or other references while performing the task, make notes on these resources as well. -3. **Minimize the prerequisites** required to work through the tutorial, and note them explicitly at the top along with links to further reading. 
For example, a tutorial on basic SQL syntax should not require familiarity with `psql` commands or relational algebra, but may require the PostgreSQL client to be installed, a database to be configured, and a working knowledge of the bash shell. -4. **Walk through the initial setup carefully**, introducing terms and tools as they're used. Always use correct terminology for your subject matter, and link to further information on each term or tool: the reader is here to learn, not to be patronized. -5. **Record the actions:** each logically-connected group of actions should be a step in the Tutorial. For example, a many-to-many relationship might involve creating several tables - include them all in a step so that you're able to explain the concept and the reader is able to associate each action with the term and result. -6. **Describe the steps:** Detail the work to be done before presenting the action. Avoid jargon that is not essential to describing the step, and take time to define new terminology upon first use. Link to additional reading (reference or [guide](guide)) for concepts, terms and actions that may not be familiar to the audience. -7. **Keep it short, but avoid "[lies to children](https://en.wikipedia.org/wiki/Lie-to-children)"**, or otherwise-inaccurate explanations that will trip them up later; again, do not patronize the reader. In situations where you cannot fully explain a term or concept in a paragraph or two, note that your explanation is an approximation and provide the reader with a link for further study. -8. **Provide links to additional reading material:** Start with any references you used yourself in #1, add other material you linked from step descriptions. - -## Testing - -- Walk through the steps yourself, doing NOTHING that isn't spelled out as an action in the Tutorial. Make note of any errors or omissions and correct them. -- Skip steps or actions and note the resulting errors; if the cause isn't obvious, consider adding a note to the subsequent step that will allow the reader to recognize the error if they encounter it. -- Ask a friend or colleague to work through the Tutorial, ideally someone in your target audience. Observe them quietly if possible, and note all mistakes or errors encountered. Revise the steps to mitigate such errors. - -In all cases, compare the final result of the test to the result you obtained when you started writing. If there are differences, try to figure out how they arose and adjust the steps as needed. - -See also: [Creating and Maintaining a Robust Article](robustness) - -## Usage - -- Tutorials can be a good starting point: reference them on introductory pages, and ensure they can be found by search engines. -- Link to Tutorials from articles on related topics - start with any guides or tutorials you included in your "Further reading" section. 
- -## Further reading - -- [Planning an article](/community/contribute/planning) -- [Writing Style and Approach](../style/style) -- [Creating and Maintaining a Robust Article](robustness) -- [HOWTO Guidelines](howto) diff --git a/advocacy_docs/playground/community-archive/style/code.mdx b/advocacy_docs/playground/community-archive/style/code.mdx deleted file mode 100644 index e6909d15e03..00000000000 --- a/advocacy_docs/playground/community-archive/style/code.mdx +++ /dev/null @@ -1,26 +0,0 @@ ---- -title: Use of Code in Articles -navTitle: Code -description: Tips for using code in articles -tags: - - documentation - - contributing ---- - -- Use Markdown "code fences" along with a language hint to demarcate code: - - ~~~markdown - ```python - import psycopg2 - ``` - ~~~ - - produces: - - ```python - import psycopg2 - ``` -- Avoid very long listings - if most of the code in the listing *isn't* being described in the article text, put it in a repository somewhere and provide a link to it. Then list *just* the code that you're writing about. - -- Try to keep lines short, to avoid horizontal scrolling on average-sized screens. Also, limit your indentation to between 2 and 4 characters when possible. - diff --git a/advocacy_docs/playground/community-archive/style/images.mdx b/advocacy_docs/playground/community-archive/style/images.mdx deleted file mode 100644 index 0d5523cffb3..00000000000 --- a/advocacy_docs/playground/community-archive/style/images.mdx +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Use of Images in Articles -navTitle: Images -description: Tips for using images in articles -tags: - - documentation - - contributing ---- - -- Prefer text when possible: it is readable on more screens, and accessible to more people. -- If an image is important to the article, include alt text which [provides equivalent information](https://writing.codidact.com/help/alt-text) diff --git a/advocacy_docs/playground/community-archive/style/index.mdx b/advocacy_docs/playground/community-archive/style/index.mdx deleted file mode 100644 index b08dab42434..00000000000 --- a/advocacy_docs/playground/community-archive/style/index.mdx +++ /dev/null @@ -1,78 +0,0 @@ ---- -title: Article format and style -navTitle: Format & Style -description: Guide to the formatting and writing style used in articles -tags: - - documentation - - contributing ---- - -Every article here starts with a block of [Frontmatter](https://www.gatsbyjs.com/docs/adding-markdown-pages/#frontmatter-for-metadata-in-markdown-files) metadata (YAML format): - -```yaml ---- -title: #The full title of the article -navTitle: #(optional) 1-2 word title for the navigation sidebar -description: #(optional) summary description for links on the index page -product: #(optional) product the article is about (ex. postgres-advanced-server) -platform: #(optional) the platform which the article is about (ex. ubuntu) -tags: #list of relevant keywords (used for related articles) Ex. - - postgresql - - ubuntu - - psql - - live-demo -katacodaPages: #(optional) list of Katacoda scenarios linked in the page, Ex. - - scenario: install-ubuntu - account: enterprisedb -katacodaPanel: #(optional) definition of the Katacoda scenario to be embedded in the page, Ex. - account: enterprisedb - scenario: sandbox - codelanguages: sql ---- - -``` - -Below the Frontmatter's closing dashes is the MDX (Markdown) formatted text of the article. 
MDX is based on [CommonMark](https://github.com/mdx-js/specification), which minimizes distracting markup - but JSX components or raw HTML may also be mixed in where needed: - -```markdown -## Introduction - -Briefly introduce the purpose of the article. -Level 2 headings (##) will be linked in the table of contents -generated automatically for the page. - -## Step 1 - -Get into the *actual* information. - -### Subsection - -Break up long runs of text into subsections. -Level 3 headings (###) will *not* appear -in the table of contents. - -## Demo - -MDX allows including and referencing React components -as well, which is handy for things like Katacoda embeds: - - - - - -``` - -## A note on search engines - -Readers may find your article from a search engine. If your article is split into separate pages, keep in mind that readers may view the pages out of order. It might be useful to alert them to important details that previous pages cover. To do so, we recommend using notes at the top of your article where prior context would be useful: - -```markdown -!!! note - Don't forget to [install prerequisite](/link/article) before trying to connect! -``` - -## Further reading - -- [Planning an article](/community/contribute/planning) diff --git a/advocacy_docs/playground/community-archive/style/style.mdx b/advocacy_docs/playground/community-archive/style/style.mdx deleted file mode 100644 index e9b5857a36c..00000000000 --- a/advocacy_docs/playground/community-archive/style/style.mdx +++ /dev/null @@ -1,34 +0,0 @@ ---- -title: Writing Style and Approach -navTitle: Style -description: Tips for choosing a consistent style when writing an article -tags: - - documentation - - contributing ---- - -This is a small collection of tips aimed at helping you to write effectively; use what you find helpful, and ignore what you don't. - -## Start with your own voice - -You're drawing from your own knowledge and experience when writing, so start by writing in your own words. You've probably explained the topic you're writing about to others before, so draw on that experience: what worked, and what did not. - -You will want to go back later and revise what you've written before submitting it, but initially the only person reading it is you - so make sure that at least *you* can understand what you've written! - -## Be prescriptive instead of descriptive - -Readers have work to get done, and need information on how to do it. When there are multiple options, they must spend time and energy trying to decide which option to take - the easier we make this for them, the more useful these articles become. So whenever possible, indicate both the preferred course of action and when / why other options might be relevant. **It is ok to be opinionated!** Empower readers to make their own decisions, but don't pretend all options are equally valid; if one approach will be best in most situations, *say that!* - -### Avoid awkward passive phrasing - -Most of us have spent a great deal of time reading existing reference documentation, and too often we mirror its style - even when we're not writing reference documentation. Reference docs strive to reflect facts and avoid opinions, even when those opinions are well-supported by facts - this can lead to stilted, hard-to-understand sentences. - -When something you've written appears awkward upon review, pretend that you're writing for a colleague whose work will affect your own: you may quickly find a more direct way to communicate. 
- -## Prefer gender-neutral language - -Most of the subjects we're writing about here have no gender, so this is usually pretty easy. Should you find it useful to include an example involving *people*, then prefer to use "they/them/their" as pronouns unless your example includes *real* people *and* you happen to know their genders. - -## Further reading - -- [Planning an article](/community/contribute/planning) diff --git a/advocacy_docs/security/advisories/cve202331043.mdx b/advocacy_docs/security/advisories/cve202331043.mdx index 1edaf3fd5c5..605ff6ed0e9 100644 --- a/advocacy_docs/security/advisories/cve202331043.mdx +++ b/advocacy_docs/security/advisories/cve202331043.mdx @@ -39,7 +39,7 @@ EDB Postgres Advanced Server (EPAS) | Product | VRMF | Remediation/First Fix | |---------|------|-----------------------| -| EPAS | All versions
up to 10.23.32 | Update to latest supported version
(at least [10.23.33](https://www.enterprisedb.com/docs/epas/10/epas_rel_notes/epas10_23_33_rel_notes/)) | +| EPAS | All versions
up to 10.23.32 | Update to latest supported version
(at least 10.23.33) | | EPAS | 11.1.7 to
11.18.28 | Update to latest supported version
(at least [11.18.29](https://www.enterprisedb.com/docs/epas/11/epas_rel_notes/epas11_18_29_rel_notes/)) | | EPAS | 12.1.2 to
12.13.16 | Update to latest supported version
(at least [12.13.17](https://www.enterprisedb.com/docs/epas/12/epas_rel_notes/epas12_13_17_rel_notes/)) | | EPAS | 13.1.4 to
13.9.12 | Update to latest supported version
(at least [13.9.13](https://www.enterprisedb.com/docs/epas/13/epas_rel_notes/epas13_9_13_rel_notes/)) | diff --git a/advocacy_docs/security/assessments/cve-2024-0985.mdx b/advocacy_docs/security/assessments/cve-2024-0985.mdx index ec0e16ab222..2a390d87bcb 100644 --- a/advocacy_docs/security/assessments/cve-2024-0985.mdx +++ b/advocacy_docs/security/assessments/cve-2024-0985.mdx @@ -61,8 +61,8 @@ CVSS Vector: AV:N/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H |---------|--------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------| | EPAS | All versions prior to 15.6.0 | Update to latest supported version
(at least [15.6.0](/epas/15/epas_rel_notes/epas15_6_0_rel_notes/)) and patch existing clusters. | | EPAS | All versions prior to 14.11.0 | Update to latest supported version
(at least [14.11.0](/epas/14/epas_rel_notes/epas14_11_0_rel_notes/)) and patch existing clusters. | -| EPAS | All versions prior to 13.14.20 | Update to latest supported version
(at least [13.14.20](/epas/13/epas_rel_notes/epas13_13_20_rel_notes/)) and patch existing clusters. | -| EPAS | All versions prior to 12.18.23 | Update to latest supported version
(at least [12.18.23](/epas/15/epas_rel_notes/epas12_18_23_rel_notes/)) and patch existing clusters. | +| EPAS | All versions prior to 13.14.20 | Update to latest supported version
(at least [13.14.20](/epas/13/epas_rel_notes/epas13_14_20_rel_notes/)) and patch existing clusters. | +| EPAS | All versions prior to 12.18.23 | Update to latest supported version
(at least [12.18.23](/epas/12/epas_rel_notes/epas12_18_23_rel_notes/)) and patch existing clusters. | ### PGE Version Information diff --git a/product_docs/docs/biganimal/release/administering_cluster/projects.mdx b/product_docs/docs/biganimal/release/administering_cluster/projects.mdx index fb5d528b9fa..3c9c740a8a0 100644 --- a/product_docs/docs/biganimal/release/administering_cluster/projects.mdx +++ b/product_docs/docs/biganimal/release/administering_cluster/projects.mdx @@ -23,7 +23,7 @@ To add a user: 4. Depending on the level of access you want for the user, select the appropriate role. 5. Select **Submit**. -You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. For more information, see [managing notifications](notifications/#manage-notifications). +You can enable in-app inbox or email notifications to get alerted when a user is invited to a project. For more information, see [managing notifications](notifications/#managing-notifications). ## Creating a project diff --git a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx index 28ebf644a96..6564504065d 100644 --- a/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx +++ b/product_docs/docs/biganimal/release/getting_started/creating_a_cluster/index.mdx @@ -235,7 +235,7 @@ Enable **Transparent Data Encryption (TDE)** to use your own encryption key. Thi !!!Note "Important" - To enable and use TDE for a cluster, you must first enable the encryption key and add it at the project level before creating a cluster. To add a key, see [Adding a TDE key at project level](../../administering_cluster/projects.mdx/#adding-a-tde-key). -- To enable and use TDE for a cluster, you must complete the configuration on the platform of your key management provider after creating a cluster. See [Completing the TDE configuration](#completing-the-TDE-configuration) for more information. +- To enable and use TDE for a cluster, you must complete the configuration on the platform of your key management provider after creating a cluster. See [Completing the TDE configuration](#completing-the-tde-configuration) for more information. !!! #### Completing the TDE configuration diff --git a/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx b/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx index 8afdf613c5b..c3b6f896b9d 100644 --- a/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx +++ b/product_docs/docs/biganimal/release/getting_started/managing_cluster.mdx @@ -16,9 +16,9 @@ While paused, clusters aren't upgraded or patched, but upgrades are applied when After seven days, single-node and high-availability clusters automatically resume. Resuming a cluster applies any pending maintenance upgrades. Monitoring begins again. -With CLI 3.7.0 and later, you can [pause and resume a cluster using the CLI](../../reference/cli/managing_clusters/#pausing-a-cluster). +With CLI 3.7.0 and later, you can [pause and resume a cluster using the CLI](../getting_started/managing_cluster/#pausing-and-resuming-clusters). -You can enable in-app inbox or email notifications to get alerted when the paused cluster is or will be reactivated. For more information, see [managing notifications](../administering_cluster/notifications/#manage-notifications). 
+You can enable in-app inbox or email notifications to get alerted when the paused cluster has been or is about to be reactivated. For more information, see [managing notifications](../administering_cluster/notifications/#managing-notifications).
 
 ### Pausing a cluster
 
diff --git a/product_docs/docs/biganimal/release/getting_started/overview.mdx b/product_docs/docs/biganimal/release/getting_started/overview.mdx
index 20bd1a228d3..16e9f92ae4b 100644
--- a/product_docs/docs/biganimal/release/getting_started/overview.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/overview.mdx
@@ -16,7 +16,7 @@ Use the following high-level steps to set up a BigAnimal account and begin using
 
 1. Create an EDB account. For more information, see [Create an EDB account](../free_trial/detail/create_an_account/). After setting up the account, you can access all of the features and capabilities of the BigAnimal portal.
 
-1. Create a cluster. When prompted for **Where to deploy**, select **BigAnimal**. See [Creating a cluster](../creating_a_cluster/).
+1. Create a cluster. When prompted for **Where to deploy**, select **BigAnimal**. See [Creating a cluster](../getting_started/creating_a_cluster/).
 
 1. Use your cluster. See [Using your cluster](../using_cluster/).
 
@@ -73,7 +73,7 @@ Use the following high-level steps to connect BigAnimal to your own cloud accoun
 
 1. Activate and manage regions. See [Managing regions](activating_regions/).
 
-1. Create a cluster. When prompted for **Where to deploy**, select **Your Cloud Account**. See [Creating a cluster](../creating_a_cluster/).
+1. Create a cluster. When prompted for **Where to deploy**, select **Your Cloud Account**. See [Creating a cluster](../getting_started/creating_a_cluster/).
 
 1. Use your cluster. See [Using your cluster](../using_cluster/).
 
diff --git a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx
index 28411b30faf..efa5073261c 100644
--- a/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx
+++ b/product_docs/docs/biganimal/release/getting_started/preparing_cloud_account/preparing_gcp/index.mdx
@@ -14,7 +14,7 @@ Ensure you have at least the following combined roles:
 
 Alternatively, you can have an equivalent single role, such as:
 
 - roles/owner
 
-BigAnimal requires you to check the readiness of your Google Cloud (GCP) account before you deploy your clusters. (You don't need to perform this check if you're using BigAnimal's cloud account as your [deployment option](../planning/deployment_options). The checks that you perform ensure that your Google Cloud account is prepared to meet your clusters' requirements and resource limits.
+BigAnimal requires you to check the readiness of your Google Cloud (GCP) account before you deploy your clusters. (You don't need to perform this check if you're using BigAnimal's cloud account as your [deployment option](/biganimal/latest/planning/deployment_options/).) The checks that you perform ensure that your Google Cloud account is prepared to meet your clusters' requirements and resource limits.
## Required APIs and services
 
diff --git a/product_docs/docs/biganimal/release/overview/03_security/index.mdx b/product_docs/docs/biganimal/release/overview/03_security/index.mdx
index d90580eb892..5aefd2e212f 100644
--- a/product_docs/docs/biganimal/release/overview/03_security/index.mdx
+++ b/product_docs/docs/biganimal/release/overview/03_security/index.mdx
@@ -51,7 +51,7 @@ This overview shows the supported cluster-to-key combinations.
 
 To enable TDE:
 
-- Before you create a TDE-enabled cluster, you must [add a TDE key](../../administering_cluster/projects##adding-a-tde-key).
+- Before you create a TDE-enabled cluster, you must [add a TDE key](../../administering_cluster/projects/#adding-a-tde-key).
 
 - See [Creating a new cluster - Security](../../getting_started/creating_a_cluster#security) to enable a TDE key during the cluster creation.
 
diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx
index 2ba3ae950e1..8dbd24e4cbc 100644
--- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx
+++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/01_connecting_from_azure/index.mdx
@@ -24,7 +24,7 @@ If you set up a private endpoint and want to change to a public network, you mus
 
 ### Using BigAnimal's cloud account
 
-When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Azure subscription ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#network-logs--telemetry-section)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately.
+When using BigAnimal's cloud account, you provide BigAnimal with your Azure subscription ID when creating a cluster (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#cluster-info-tab)). BigAnimal, in turn, provides you with a private link alias, which you can use to connect to your cluster privately.
 
 1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section:
    1. Select **Private**.
 
diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx
index 0d6c7864b44..603bafbed66 100644
--- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx
+++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/02_connecting_from_aws/index.mdx
@@ -15,7 +15,7 @@ The way you create a private endpoint differs when you're using your AWS account
 
 ## Using BigAnimal's cloud account
 
-When using BigAnimal's cloud account, you provide BigAnimal with your AWS account ID when creating a cluster (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#network-logs--telemetry-section)). BigAnimal, in turn, provides you with an AWS service name, which you can use to connect to your cluster privately.
+When using BigAnimal's cloud account, you provide BigAnimal with your AWS account ID when creating a cluster (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#cluster-info-tab)). 
BigAnimal, in turn, provides you with an AWS service name, which you can use to connect to your cluster privately.
 
 1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section:
    1. Select **Private**.
 
diff --git a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx
index 14db3add6a8..b2536be3baa 100644
--- a/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx
+++ b/product_docs/docs/biganimal/release/using_cluster/02_connecting_your_cluster/connecting_from_gcp/index.mdx
@@ -6,7 +6,7 @@ navTitle: From Google Cloud
 The way you create a private Google Cloud endpoint differs when you're using your Google Cloud account versus using BigAnimal's cloud account.
 
 ## Using BigAnimal's cloud account
-When using BigAnimal's cloud account, when creating a cluster, you provide BigAnimal with your Google Cloud project ID (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#network-logs--telemetry-section)). BigAnimal, in turn, provides you with a Google Cloud service attachment, which you can use to connect to your cluster privately.
+When using BigAnimal's cloud account, you provide BigAnimal with your Google Cloud project ID when creating a cluster (see [Networking](/biganimal/latest/getting_started/creating_a_cluster/#cluster-info-tab)). BigAnimal, in turn, provides you with a Google Cloud service attachment, which you can use to connect to your cluster privately.
 
 1. When creating your cluster, on the **Cluster Settings** tab, in the **Network** section:
    1. Select **Private**.
 
diff --git a/product_docs/docs/jdbc_connector/42.5.4.2/installing/windows.mdx b/product_docs/docs/jdbc_connector/42.5.4.2/installing/windows.mdx
index 70b3d598c36..b865386a4c0 100644
--- a/product_docs/docs/jdbc_connector/42.5.4.2/installing/windows.mdx
+++ b/product_docs/docs/jdbc_connector/42.5.4.2/installing/windows.mdx
@@ -30,7 +30,7 @@ Proceed to [Using the graphical installer](#using-the-graphical-installer).
 
 ## Using Stack Builder or StackBuilder Plus
 
-If you're using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](https://www.enterprisedb.com/docs/supported-open-source/postgresql/installing/03_using_stackbuilder/).
+If you're using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](/supported-open-source/postgresql/installing/using_stackbuilder/).
 
 If you're using EDB Postgres Advanced Server, you can invoke the graphical installer with StackBuilder Plus. See [Using StackBuilder Plus](/epas/latest/installing/windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/).
 
diff --git a/product_docs/docs/migration_portal/4/known_issues_notes.mdx b/product_docs/docs/migration_portal/4/known_issues_notes.mdx
index c336582544a..0c9a97de4b3 100644
--- a/product_docs/docs/migration_portal/4/known_issues_notes.mdx
+++ b/product_docs/docs/migration_portal/4/known_issues_notes.mdx
@@ -240,7 +240,7 @@ While using the Oracle default case, you may experience a lower compatibility ra
 
 AI Copilot is a tool designed to assist you with issues that come up while migrating DDLs. While this tool can greatly aid in problem solving, it's important to understand that generative AI technology will sometimes generate inaccurate or irrelevant responses. 
-The accuracy and quality of recommended solutions is heavily influenced by your [prompt and query strategies](/03_mp_using_portal/ai_good_prompts/).
+The accuracy and quality of recommended solutions are heavily influenced by your [prompt and query strategies](03_mp_using_portal/mp_ai_copilot/ai_good_prompts/).
 
 Before applying any suggested solutions in production environments, we strongly recommend testing the solutions in a controlled test environment
 
diff --git a/product_docs/docs/migration_toolkit/55/installing/macos.mdx b/product_docs/docs/migration_toolkit/55/installing/macos.mdx
index 3855befecd3..2a3114d8904 100644
--- a/product_docs/docs/migration_toolkit/55/installing/macos.mdx
+++ b/product_docs/docs/migration_toolkit/55/installing/macos.mdx
@@ -25,7 +25,7 @@ export JAVA_HOME=$(/usr/libexec/java_home)
 
 ## Using Stack Builder
 
-If you are using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](https://www.enterprisedb.com/docs/supported-open-source/postgresql/installing/03_using_stackbuilder/).
+If you are using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](/supported-open-source/postgresql/installing/using_stackbuilder/).
 
 1. In Stack Builder, follow the prompts until you get to the module selection page.
 
diff --git a/product_docs/docs/migration_toolkit/55/installing/windows.mdx b/product_docs/docs/migration_toolkit/55/installing/windows.mdx
index 343847e5ce4..09dce11da79 100644
--- a/product_docs/docs/migration_toolkit/55/installing/windows.mdx
+++ b/product_docs/docs/migration_toolkit/55/installing/windows.mdx
@@ -37,7 +37,7 @@ Proceed to the [Using the graphical installer](#using-the-graphical-installer) s
 
 ## Using Stack Builder or StackBuilder Plus
 
-If you are using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](https://www.enterprisedb.com/docs/supported-open-source/postgresql/installing/03_using_stackbuilder/).
+If you are using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](/supported-open-source/postgresql/installing/using_stackbuilder/).
 
 1. In Stack Builder, follow the prompts until you get to the module selection page.
 
diff --git a/product_docs/docs/odbc_connector/13/installing/windows.mdx b/product_docs/docs/odbc_connector/13/installing/windows.mdx
index 9721c0d8498..665a3e580f0 100644
--- a/product_docs/docs/odbc_connector/13/installing/windows.mdx
+++ b/product_docs/docs/odbc_connector/13/installing/windows.mdx
@@ -24,7 +24,7 @@ Proceed to [Using the graphical installer](#using-the-graphical-installer).
 
 ## Using Stack Builder or StackBuilder Plus
 
-If you're using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](https://www.enterprisedb.com/docs/supported-open-source/postgresql/installing/03_using_stackbuilder/).
+If you're using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](/supported-open-source/postgresql/installing/using_stackbuilder/).
 
 If you're using EDB Postgres Advanced Server, you can invoke the graphical installer with StackBuilder Plus. See [Using StackBuilder Plus](/epas/latest/installing/windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/). 
diff --git a/product_docs/docs/odbc_connector/16/installing/windows.mdx b/product_docs/docs/odbc_connector/16/installing/windows.mdx index 9721c0d8498..665a3e580f0 100644 --- a/product_docs/docs/odbc_connector/16/installing/windows.mdx +++ b/product_docs/docs/odbc_connector/16/installing/windows.mdx @@ -24,7 +24,7 @@ Proceed to [Using the graphical installer](#using-the-graphical-installer). ## Using Stack Builder or StackBuilder Plus -If you're using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](https://www.enterprisedb.com/docs/supported-open-source/postgresql/installing/03_using_stackbuilder/). +If you're using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](/supported-open-source/postgresql/installing/using_stackbuilder/). If you're using EDB Postgres Advanced Server, you can invoke the graphical installer with StackBuilder Plus. See [Using StackBuilder Plus](/epas/latest/installing/windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/). diff --git a/product_docs/docs/pem/8/considerations/setup_ha_using_efm.mdx b/product_docs/docs/pem/8/considerations/setup_ha_using_efm.mdx index 0f4c49a7ed3..1adc48f540f 100644 --- a/product_docs/docs/pem/8/considerations/setup_ha_using_efm.mdx +++ b/product_docs/docs/pem/8/considerations/setup_ha_using_efm.mdx @@ -9,7 +9,7 @@ redirects: Failover Manager is a high-availability tool from EDB that enables a Postgres primary node to failover to a standby node during a software or hardware failure on the primary. You can set up Failover Manager only with a fresh installation of a Postgres Enterprise Manager (PEM) server. You can't set it for an existing PEM installation. - + The examples in the following sections use these IP addresses: - 172.16.161.200 - PEM Primary @@ -28,9 +28,9 @@ The following must use the VIP address: 1. Install the following on the primary and one or more standbys: - - [EDB Postgres Advanced Server](/epas/8/epas_inst_linux/) (backend database for PEM Server) + - [EDB Postgres Advanced Server](/epas/latest/installing/) (backend database for PEM Server) - [PEM server](/pem/8/installing/) - - [EDB Failover Manager 4.1](/efm/8/efm_user/03_installing_efm/) + - [EDB Failover Manager 4.1](/efm/latest/installing/) Refer to the installation instructions in the product documentation using these links or see the instructions on the [EDB repos website](https://repos.enterprisedb.com). Replace `USERNAME:PASSWORD` with your username and password in the instructions to access the EDB repositories. @@ -43,6 +43,7 @@ The following must use the VIP address: ```shell /usr/edb/pem/bin/configure-pem-server.sh -t 1 ``` + For more detail on configuration types see, [Configuring the PEM server on Linux](/pem/8/installing/configuring_the_pem_server_on_linux/). 3. Add the following ports in the firewall on the primary and all the standby servers to allow the access: @@ -90,6 +91,7 @@ The following must use the VIP address: For more information on configuring parameters for streaming replication, see the [PostgreSQL documentation](https://www.postgresql.org/docs/13/warm-standby.html#STREAMING-REPLICATION). !!! Note + The configuration parameters might differ for different versions of the database server. You can email EDB Support at [techsupport@enterprisedb.com](mailto:techsupport@enterprisedb.com) for help with setting up these parameters. 3. 
Add the following entry in the host-based authentication (`/var/lib/edb/as13/data/pg_hba.conf`) file to allow the replication user to connect from all the standbys: @@ -99,6 +101,7 @@ The following must use the VIP address: ``` !!! Note + You can change the cidr range of the IP address, if needed. 4. Modify the host-based authentication (`/var/lib/edb/as13/data/pg_hba.conf`) file for the pem_user role to connect to all databases using the scram-sha-256 authentication method: @@ -128,6 +131,7 @@ The following must use the VIP address: ``` !!! Note + This example uses the pg_basebackup utility to create the replicas of the PEM backend database server on the standby servers. When using pg_basebackup, you need to stop the existing database server and remove the existing data directories. 2. Remove the data directory of the database server on all the standby nodes: @@ -222,6 +226,7 @@ For example: This code ensures that the webserver is configured on the standby and is disabled by default. Switchover by EFM enables the webserver. !!! Note + Manually keep the certificates in sync on master and standbys whenever the certificates are updated. 8. Run the `configure-selinux.sh` script to configure the SELinux policy for PEM: @@ -239,7 +244,7 @@ This code ensures that the webserver is configured on the standby and is disable $ sudo chmod 640 /root/.pem/agent1.crt ``` -9. Disable and stop HTTPD and PEM agent services if they're running on all replica nodes: +9. Disable and stop HTTPD and PEM agent services if they're running on all replica nodes: ```shell systemctl stop pemagent @@ -249,8 +254,8 @@ systemctl disable httpd ``` !!! Note - At this point, a PEM primary server and two standbys are ready to take over from the primary whenever needed. + At this point, a PEM primary server and two standbys are ready to take over from the primary whenever needed. ## Set up EFM to manage failover on all hosts @@ -260,7 +265,7 @@ systemctl disable httpd - Grant the execute privileges on the functions related to WAL logs and the monitoring privileges to the user. - Add entries in `pg_hba.conf` to allow the efm database user to connect to the database server from all nodes on all the hosts. - Reload the configurations on all the database servers. - + For example: ```sql @@ -502,6 +507,7 @@ In case of failover, any of the standbys are promoted as the primary node, and P ## Current limitations The current limitations include: + - Web console sessions for the users are lost during the switchover. - Per-user settings set from the Preferences dialog box are lost, as they’re stored in local configuration files on the file system. - Background processes, started by the Backup, Restore, and Maintenance dialogs boxes, and their logs aren't shared between the systems. They are lost during switchover. diff --git a/product_docs/docs/pem/8/installing/windows/index.mdx b/product_docs/docs/pem/8/installing/windows/index.mdx index 215a868f468..b753bcf2ba9 100644 --- a/product_docs/docs/pem/8/installing/windows/index.mdx +++ b/product_docs/docs/pem/8/installing/windows/index.mdx @@ -35,7 +35,7 @@ The PEM server backend database can be an EDB distribution of the PostgreSQL or - For detailed information about installing and configuring a standalone PEM agent, see [Installing the PEM agent on Windows](../../installing_pem_agent/windows_agent). -- Language pack installers contain supported languages that you can use with EDB Postgres Advanced Server and EDB PostgreSQL database installers. 
The language pack installer allows you to install Perl, TCL/TK, and Python without installing supporting software from third-party vendors. For more information about installing and using the language pack, see [EDB Postgres language pack](/epas/8/language_pack/).
+- Language pack installers contain supported languages that you can use with EDB Postgres Advanced Server and EDB PostgreSQL database installers. The language pack installer allows you to install Perl, TCL/TK, and Python without installing supporting software from third-party vendors. For more information about installing and using the language pack, see [EDB Postgres language pack](/language_pack/latest/).
 
 - For troubleshooting the installation or configuration of the PEM agent, see [Troubleshooting PEM agent](../../troubleshooting_agent/).
 
diff --git a/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/08_pem_define_aws_instance_connection.mdx b/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/08_pem_define_aws_instance_connection.mdx
index 5c4326c7f2e..d8821312612 100644
--- a/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/08_pem_define_aws_instance_connection.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/01_toc_pem_getting_started/08_pem_define_aws_instance_connection.mdx
@@ -49,7 +49,7 @@ As the PEM Agent will be monitoring the Postgres(RDS) AWS instance remotely, the
 | Manage Alerts | Limited | When you run an alert script on the database server, it will run on the machine where the bound PEM Agent is running, and not on the actual database server machine. |
 | Manage Charts | Yes | |
 | Manage Dashboards | Limited | Some dashboards may not be able to show complete data. For example, the operating system information of the database server will not be displayed as it is not available. |
-| Manage Probes | Limited | Some of the PEM probes will not return information, and some of the functionalities may be affected. For details about probe functionality, see [Agent privileges ](/pem/latest/managing_pem_agent/#agent-privileges). |
+| Manage Probes | Limited | Some of the PEM probes will not return information, and some of the functionalities may be affected. For details about probe functionality, see [Agent privileges](/pem/8/managing_pem_agent/#agent-privileges). |
 | Postgres Expert | Limited | The Postgres Expert will provide partial information as operating system information is not available. |
 | Postgres Log Analysis Expert | No | The Postgres Log Analysis Expert will not be able to perform an analysis as it is dependent on the logs imported by log manager, which will not work as required. |
 | Scheduled Tasks | Limited | Scheduled tasks will work only for database server; scripts will run on a remote Agent. |
diff --git a/product_docs/docs/pem/8/pem_online_help/02_toc_pem_agent/05_pem_agent_privileges.mdx b/product_docs/docs/pem/8/pem_online_help/02_toc_pem_agent/05_pem_agent_privileges.mdx
index 8a1b8644834..449c01e4047 100644
--- a/product_docs/docs/pem/8/pem_online_help/02_toc_pem_agent/05_pem_agent_privileges.mdx
+++ b/product_docs/docs/pem/8/pem_online_help/02_toc_pem_agent/05_pem_agent_privileges.mdx
@@ -23,7 +23,7 @@ Please note that PEM functionality diminishes as the privileges of the PEM agent
 | Manage Alerts | yes | yes | yes

NOTE: When *Run alert script on the database server* is selected, the script runs on the machine where the bound PEM Agent is running, not on the actual database server machine.
| | Manage Charts | yes | yes | yes | | Manage Dashboards | yes | Some dashboards may not be able to show complete data. For example, columns such as swap usage, CPU usage, IO read, and IO write will be displayed as 0 in the session activity dashboard. | Some dashboards may not be able to show complete data. For example, the operating system information of the database server will not be displayed as not available. | -| Manage Probes | yes | Some of the PEM probes will not return information, and some of functionalities may be affected. For details about probe functionality, see the [Agent privileges](/pem/latest/managing_pem_agent/#agent-privileges). | Some of the PEM probes will not return information, and some of the functionalities may be affected. | +| Manage Probes | yes | Some of the PEM probes will not return information, and some of functionalities may be affected. For details about probe functionality, see the [Agent privileges](/pem/8/managing_pem_agent/#agent-privileges). | Some of the PEM probes will not return information, and some of the functionalities may be affected. | | Postgres Expert | yes | The Postgres Expert will be able to access the configuration expert and schema expert, but not the security expert. | The Expert will provide partial information as operating system information is not available. | | Postgres Log Analysis Expert | yes | The Postgres Log Analysis Expert may not be able to do the analysis as it is dependent on the logs imported by log manager, which will not work as required. | The Postgres Log Analysis Expert will not be able to do the analysis as it is dependent on the logs imported by log manager, which will not work as required. | | Scheduled Tasks | yes | For Linux if user is the same as batch_script_user in agent.cfg then shell script will run. | Scheduled tasks will work only for database server; scripts will run on a remote Agent. | diff --git a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx index 3d6358caa53..bc8632a4be4 100644 --- a/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx +++ b/product_docs/docs/pem/8/pem_online_help/04_toc_pem_features/21_performance_diagnostic.mdx @@ -15,7 +15,7 @@ Peformance Diagnostic feature is supported for Advanced Server databases from PE For PostgreSQL databases, Performance Diagnostics is supported only for versions 10, 11, 12, and 13 installed on supported platforms. -For more information on EDB Wait States, see [EDB wait states docs](/epas/latest/managing_performance/evaluating_wait_states/#edb-wait-states). +For more information on EDB Wait States, see [EDB wait states docs](/pg_extensions/wait_states/). You can analyze the Wait States data on multiple levels by narrowing down your selection of data. Each level of the graph is populated on the basis of your selection of data at the higher level. 
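Before relying on the Performance Diagnostic graphs, it can help to confirm that the wait-states extension is actually installed on the monitored server. A minimal sketch, assuming a database named `edb`; the extension-name pattern is also an assumption, so check the linked wait states docs for the exact name your version ships:

```shell
# List any installed wait-states extension; the database name "edb" and the
# extname pattern are assumptions. Adjust both to your installation.
psql -d edb -c "SELECT extname, extversion FROM pg_extension WHERE extname LIKE '%wait_states%';"
```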
diff --git a/product_docs/docs/pem/8/tuning_performance/performance_diagnostic.mdx b/product_docs/docs/pem/8/tuning_performance/performance_diagnostic.mdx index 2f9566bb258..397415fc3ec 100644 --- a/product_docs/docs/pem/8/tuning_performance/performance_diagnostic.mdx +++ b/product_docs/docs/pem/8/tuning_performance/performance_diagnostic.mdx @@ -16,7 +16,7 @@ The Performance Diagnostic dashboard analyzes the database performance for Postg - For EDB Postgres Extended databases, Performance Diagnostic is supported for version 11 or later on the supported RHEL platforms. -For more information on EDB wait states, see [EDB wait states](/epas/latest/managing_performance/evaluating_wait_states/#edb-wait-states). +For more information on EDB wait states, see [EDB wait states](/pg_extensions/wait_states/). To analyze the Wait States data on multiple levels, narrow down the data you select. The data you select at the higher level of the graph populates the lower level. diff --git a/product_docs/docs/pgbouncer/1/installing/windows.mdx b/product_docs/docs/pgbouncer/1/installing/windows.mdx index be878ab0f76..2f772f7a562 100644 --- a/product_docs/docs/pgbouncer/1/installing/windows.mdx +++ b/product_docs/docs/pgbouncer/1/installing/windows.mdx @@ -27,7 +27,7 @@ Proceed to [Using the graphical installer](#using-the-graphical-installer). ## Using Stack Builder or StackBuilder Plus -If you're using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](https://www.enterprisedb.com/docs/supported-open-source/postgresql/installing/03_using_stackbuilder/). +If you're using PostgreSQL, you can invoke the graphical installer with Stack Builder. See [Using Stack Builder](/supported-open-source/postgresql/installing/using_stackbuilder/). If you're using EDB Postgres Advanced Server, you can invoke the graphical installer with StackBuilder Plus. See [Using StackBuilder Plus](/epas/latest/installing/windows/installing_advanced_server_with_the_interactive_installer/using_stackbuilder_plus/). diff --git a/product_docs/docs/pgd/3.7/harp/03_installation.mdx b/product_docs/docs/pgd/3.7/harp/03_installation.mdx index ef9ff6a3bcb..dbdeba3ed72 100644 --- a/product_docs/docs/pgd/3.7/harp/03_installation.mdx +++ b/product_docs/docs/pgd/3.7/harp/03_installation.mdx @@ -65,7 +65,7 @@ considerations. Currently CentOS/RHEL packages are provided by the EDB packaging infrastructure. For details, see the [HARP product -page](https://www.enterprisedb.com/docs/harp/latest/). +page](./). ### etcd packages diff --git a/product_docs/docs/pgd/4/bdr/catalogs.mdx b/product_docs/docs/pgd/4/bdr/catalogs.mdx index 55a2158bc69..0daf5ed0474 100644 --- a/product_docs/docs/pgd/4/bdr/catalogs.mdx +++ b/product_docs/docs/pgd/4/bdr/catalogs.mdx @@ -787,10 +787,10 @@ is set correctly when the wait relates to BDR. ### `bdr.stat_relation` Shows apply statistics for each relation. Contains data only if tracking is enabled with -[`bdr.track_relation_apply`](configuration.mdx#bdrtrack_relation_apply) +[`bdr.track_relation_apply`](configuration.mdx#monitoring-and-logging) and if data was replicated for a given relation. -`lock_acquire_time` is updated only if [`bdr.track_apply_lock_timing`](configuration.mdx#bdrtrack_apply_lock_timing) +`lock_acquire_time` is updated only if [`bdr.track_apply_lock_timing`](configuration.mdx#monitoring-and-logging) is set to `on` (default: `off`). 
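To see the effect of the two settings referenced above, a quick check of the GUCs before querying the statistics view can save confusion. A minimal sketch, assuming a PGD database named `bdrdb`; the GUC and view names come from the surrounding hunk:

```shell
# Check whether per-relation apply tracking and lock timing are enabled,
# then sample the view. "bdrdb" is an assumed database name.
psql -d bdrdb -c "SHOW bdr.track_relation_apply;"
psql -d bdrdb -c "SHOW bdr.track_apply_lock_timing;"
psql -d bdrdb -c "SELECT * FROM bdr.stat_relation LIMIT 5;"
```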
You can reset the stored relation statistics by calling @@ -819,10 +819,10 @@ You can reset the stored relation statistics by calling ### `bdr.stat_subscription` Shows apply statistics for each subscription. Contains data only if tracking is enabled with -[`bdr.track_subscription_apply`](configuration.mdx#bdrtrack_subscription_apply). +[`bdr.track_subscription_apply`](configuration.mdx#monitoring-and-logging). You can reset the stored subscription statistics by calling -[`bdr.reset_subscription_stats()`](functions.mdx#bdrreset_subscripion_stats). +[`bdr.reset_subscription_stats()`](functions.mdx#bdrreset_subscription_stats). #### `bdr.stat_subscription` columns diff --git a/product_docs/docs/pgd/4/bdr/functions.mdx b/product_docs/docs/pgd/4/bdr/functions.mdx index 5191cbc045a..57ce16a86cf 100644 --- a/product_docs/docs/pgd/4/bdr/functions.mdx +++ b/product_docs/docs/pgd/4/bdr/functions.mdx @@ -51,7 +51,7 @@ When you initialize a session, this is set to the node id the client is connected to. This allows an application to figure out the node it's connected to, even behind a transparent proxy. -It's also used with [Connection pools and proxies](CAMO#connection-pools-and-proxies). +It's also used with [Connection pools and proxies](camo#connection-pools-and-proxies). ### bdr.last_committed_lsn diff --git a/product_docs/docs/pgd/4/deployments/index.mdx b/product_docs/docs/pgd/4/deployments/index.mdx index 3f85b9230f5..acd775c9cf0 100644 --- a/product_docs/docs/pgd/4/deployments/index.mdx +++ b/product_docs/docs/pgd/4/deployments/index.mdx @@ -8,7 +8,7 @@ navigation: You can deploy and install EDB Postgres Distributed products using the following methods: -- TPAexec is an orchestration tool that uses Ansible to build Postgres clusters as specified by TPA (Trusted Postgres Architecture), a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations are as applicable to quick testbed setups as to production environments. To deploy PGD using TPA, see the [TPA documentation](/admin-tpa/installing/). +- TPAexec is an orchestration tool that uses Ansible to build Postgres clusters as specified by TPA (Trusted Postgres Architecture), a set of reference architectures that document how to set up and operate Postgres in various scenarios. TPA represents the best practices followed by EDB, and its recommendations are as applicable to quick testbed setups as to production environments. To deploy PGD using TPA, see the [TPA documentation](tpaexec/installing_tpaexec/). - Manual installation is also available where TPA is not an option. Details of how to deploy PGD manually are in the [manual installation](/pgd/4/deployments/manually/) section of the documentation. diff --git a/product_docs/docs/pgd/4/harp/03_installation.mdx b/product_docs/docs/pgd/4/harp/03_installation.mdx index 5d566c7fbd9..663ec7ecb66 100644 --- a/product_docs/docs/pgd/4/harp/03_installation.mdx +++ b/product_docs/docs/pgd/4/harp/03_installation.mdx @@ -67,7 +67,7 @@ considerations. Currently CentOS/RHEL packages are provided by the EDB packaging infrastructure. For details, see the [HARP product -page](https://www.enterprisedb.com/docs/harp/latest/). +page](./). 
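The reset function mentioned in this hunk pairs naturally with the view it clears. A minimal sketch, again assuming a database named `bdrdb`; both identifiers are taken from the diff above:

```shell
# Sample subscription apply statistics, then reset the stored counters.
# "bdrdb" is an assumed database name.
psql -d bdrdb -c "SELECT * FROM bdr.stat_subscription LIMIT 5;"
psql -d bdrdb -c "SELECT bdr.reset_subscription_stats();"
```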
### etcd packages diff --git a/product_docs/docs/pgd/5/appusage/table-access-methods.mdx b/product_docs/docs/pgd/5/appusage/table-access-methods.mdx index 7cbb8b1c6e4..ac7f124647b 100644 --- a/product_docs/docs/pgd/5/appusage/table-access-methods.mdx +++ b/product_docs/docs/pgd/5/appusage/table-access-methods.mdx @@ -31,4 +31,4 @@ replicate to all PGD nodes in the cluster. For more information on these table access methods, see: - [Autocluster example](/pg_extensions/advanced_storage_pack/using/#autocluster-example) -- [Refdata example](pg_extensions/advanced_storage_pack/using/#refdata-example) +- [Refdata example](/pg_extensions/advanced_storage_pack/using/#refdata-example) diff --git a/product_docs/docs/pgd/5/consistency/column-level-conflicts.mdx b/product_docs/docs/pgd/5/consistency/column-level-conflicts.mdx index 9bdb11b6922..6fe4df7f3f0 100644 --- a/product_docs/docs/pgd/5/consistency/column-level-conflicts.mdx +++ b/product_docs/docs/pgd/5/consistency/column-level-conflicts.mdx @@ -31,7 +31,7 @@ Applied to the previous example, the result is `(100,100)` on both nodes, despit When thinking about column-level conflict resolution, it can be useful to see tables as vertically partitioned, so that each update affects data in only one slice. This approach eliminates conflicts between changes to different subsets of columns. In fact, vertical partitioning can even be a practical alternative to column-level conflict resolution. -Column-level conflict resolution requires the table to have `REPLICA IDENTITY FULL`. The [bdr.alter_table_conflict_detection()](conflict_functions#bdralter_table_conflict_detection) function checks that and fails with an error if this setting is missing. +Column-level conflict resolution requires the table to have `REPLICA IDENTITY FULL`. The [bdr.alter_table_conflict_detection()](../reference/conflict_functions#bdralter_table_conflict_detection) function checks that and fails with an error if this setting is missing. ## Enabling and disabling column-level conflict resolution @@ -39,7 +39,7 @@ Column-level conflict resolution requires the table to have `REPLICA IDENTITY FU Column-level conflict detection uses the `column_timestamps` type. This type requires any user needing to detect column-level conflicts to have at least the [bdr_application](../security/pgd-predefined-roles/#bdr_application) role assigned. !!! -The [bdr.alter_table_conflict_detection()](conflict_functions#bdralter_table_conflict_detection) function manages column-level conflict resolution. +The [bdr.alter_table_conflict_detection()](../reference/conflict_functions/#bdralter_table_conflict_detection) function manages column-level conflict resolution. 
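Because the text above notes that `REPLICA IDENTITY FULL` is a hard requirement, a worked sketch makes the ordering explicit. The table name, detection method, and timestamp column below are all assumptions for illustration; consult the `bdr.alter_table_conflict_detection` reference for the values supported by your release:

```shell
# Set REPLICA IDENTITY FULL first, then enable column-level conflict
# detection. Table, method, and column names are assumed examples.
psql -d bdrdb <<'SQL'
ALTER TABLE public.inventory REPLICA IDENTITY FULL;
SELECT bdr.alter_table_conflict_detection(
  'public.inventory', 'column_modify_timestamp', 'cts');
SQL
```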
### Example diff --git a/product_docs/docs/pgd/5/ddl/ddl-pgd-functions-like-ddl.mdx b/product_docs/docs/pgd/5/ddl/ddl-pgd-functions-like-ddl.mdx index db4ecceb255..0f9aa5d00e3 100644 --- a/product_docs/docs/pgd/5/ddl/ddl-pgd-functions-like-ddl.mdx +++ b/product_docs/docs/pgd/5/ddl/ddl-pgd-functions-like-ddl.mdx @@ -20,7 +20,7 @@ Replication set management: Conflict management: -- [`bdr.alter_table_conflict_detection`](/pgd/latest/consistency/conflict_functions#bdralter_table_conflict_detection) +- [`bdr.alter_table_conflict_detection`](../reference/conflict_functions/#bdralter_table_conflict_detection) - `bdr.column_timestamps_enable` (deprecated; use `bdr.alter_table_conflict_detection()`) - `bdr.column_timestamps_disable` (deprecated; use `bdr.alter_table_conflict_detection()`) diff --git a/product_docs/docs/pgd/5/planning/choosing_server.mdx b/product_docs/docs/pgd/5/planning/choosing_server.mdx index 1b8375178a2..d916c81bbc6 100644 --- a/product_docs/docs/pgd/5/planning/choosing_server.mdx +++ b/product_docs/docs/pgd/5/planning/choosing_server.mdx @@ -32,7 +32,7 @@ The following table lists features of EDB Postgres Distributed that are dependen | [Lag Control](/pgd/latest/durability/lag-control/) | N | Y | 14+ | | [Decoding Worker](/pgd/latest/node_management/decoding_worker) | N | 13+ | 14+ | | [Lag tracker](/pgd/latest/monitoring/sql/#monitoring-outgoing-replication) | N | Y | 14+ | -| [Missing partition conflict](/pgd/latest/consistency/conflicts/#target_table_note) | N | Y | 14+ | -| [No need for UPDATE Trigger on tables with TOAST](/pgd/latest/consistency/conflicts/#toast-support-details) | N | Y | 14+ | -| [Automatically hold back FREEZE](/pgd/latest/consistency/conflicts/#origin-conflict-detection) | N | Y | 14+ | +| [Missing partition conflict](../reference/conflicts/#target_table_note) | N | Y | 14+ | +| [No need for UPDATE Trigger on tables with TOAST](../consistency/conflicts/02_types_of_conflict/#toast-support-details) | N | Y | 14+ | +| [Automatically hold back FREEZE](../consistency/conflicts/03_conflict_detection/#origin-conflict-detection) | N | Y | 14+ | | [Transparent Data Encryption](/tde/latest/) | N | 15+ | 15+ | diff --git a/product_docs/docs/pgd/5/quickstart/further_explore_conflicts.mdx b/product_docs/docs/pgd/5/quickstart/further_explore_conflicts.mdx index 2a864590cb0..aaae02e775b 100644 --- a/product_docs/docs/pgd/5/quickstart/further_explore_conflicts.mdx +++ b/product_docs/docs/pgd/5/quickstart/further_explore_conflicts.mdx @@ -132,7 +132,7 @@ You'll see that both commits are working. However, in the bottom-right pane, you ![4 Sessions showing conflict detected](images/4sessionsinsertconflict.png) -A row in the conflict history now notes a conflict in the table where the `insert_exists`. It also notes that the resolution for this conflict is that the newer record, based on the timing of the commit, is retained. This conflict is called an INSERT/INSERT conflict. You can read more about this type of conflict in [INSERT/INSERT conflicts](../consistency/conflicts/#insertinsert-conflicts). +A row in the conflict history now notes a conflict in the table where the `insert_exists`. It also notes that the resolution for this conflict is that the newer record, based on the timing of the commit, is retained. This conflict is called an INSERT/INSERT conflict. You can read more about this type of conflict in [INSERT/INSERT conflicts](../consistency/conflicts/02_types_of_conflict/#insertinsert-conflicts). 
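After provoking the `insert_exists` conflict described above, the stored history can also be inspected directly rather than through the quick-start panes. A minimal sketch, assuming the quick start's `bdrdb` database; the view name is the one PGD's catalogs expose:

```shell
# Look at the most recent entries in the conflict history summary.
psql -d bdrdb -c "SELECT * FROM bdr.conflict_history_summary LIMIT 5;"
```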
## Creating an update conflict @@ -161,7 +161,7 @@ Again you'll see both commits working. And, again, in the bottom-right pane, you ![4 Sessions showing update conflict detected](images/4sessionsupdateconflict.png) -An additional row in the conflict history shows an `update_origin_change` conflict occurred and that the resolution was `apply_remote`. This resolution means that the remote change was applied, updating the record. This conflict is called an UPDATE/UPDATE conflict and is explained in more detail in [UPDATE/UPDATE conflicts](../consistency/conflicts/#updateupdate-conflicts). +An additional row in the conflict history shows an `update_origin_change` conflict occurred and that the resolution was `apply_remote`. This resolution means that the remote change was applied, updating the record. This conflict is called an UPDATE/UPDATE conflict and is explained in more detail in [UPDATE/UPDATE conflicts](../consistency/conflicts/02_types_of_conflict/#updateupdate-conflicts). !!!Tip Exiting tmux You can quickly exit tmux and all the associated sessions. First terminate any running processes, as they will otherwise continue running after the session is killed. Press **Control-b** and then enter `:kill-session`. This approach is simpler than quitting each pane's session one at a time using **Control-D** or `exit`. diff --git a/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx b/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx index 363c0c2f81e..1e5fbe0f393 100644 --- a/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx +++ b/product_docs/docs/pgd/5/quickstart/quick_start_linux.mdx @@ -132,7 +132,7 @@ tpaexec configure democluster \ --hostnames-unsorted ``` -You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD's Always-on architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers and a [Barman](/backup/#physical-backup) node for backup. +You specify the PGD-Always-ON architecture (`--architecture PGD-Always-ON`), which sets up the configuration for [PGD's Always-on architectures](../planning/architectures/). As part of the default architecture, it configures your cluster with three data nodes, cohosting three [PGD Proxy](../routing/proxy/) servers and a [Barman](../backup/#physical-backup) node for backup. For Linux hosts, specify that you're targeting a "bare" platform (`--platform bare`). TPA determines the Linux version running on each host during deployment. See [the EDB Postgres Distributed compatibility table](https://www.enterprisedb.com/resources/platform-compatibility) for details about the supported operating systems. diff --git a/product_docs/docs/pgd/5/reference/conflict_functions.mdx b/product_docs/docs/pgd/5/reference/conflict_functions.mdx index 7d434446382..a8e0e1e4d5d 100644 --- a/product_docs/docs/pgd/5/reference/conflict_functions.mdx +++ b/product_docs/docs/pgd/5/reference/conflict_functions.mdx @@ -61,8 +61,8 @@ bdr.alter_node_set_conflict_resolver(node_name text, ### Parameters - `node_name` — Name of the node that's being changed. -- `conflict_type` — Conflict type for which to apply the setting (see [List of conflict types](#list-of-conflict-types)). -- `conflict_resolver` — Resolver to use for the given conflict type (see [List of conflict resolvers](#list-of-conflict-resolvers)). 
+- `conflict_type` — Conflict type for which to apply the setting (see [List of conflict types](conflicts/#list-of-conflict-types)). +- `conflict_resolver` — Resolver to use for the given conflict type (see [List of conflict resolvers](conflicts/#list-of-conflict-resolvers)). ### Notes diff --git a/product_docs/docs/pgd/5/reference/pgd-settings.mdx b/product_docs/docs/pgd/5/reference/pgd-settings.mdx index e104b7165a8..a6927e2afc6 100644 --- a/product_docs/docs/pgd/5/reference/pgd-settings.mdx +++ b/product_docs/docs/pgd/5/reference/pgd-settings.mdx @@ -15,7 +15,7 @@ you can set the values at any time. Sets the default conflict detection method for newly created tables. Accepts same values as -[bdr.alter_table_conflict_detection()](../consistency/conflicts#bdralter_table_conflict_detection). +[bdr.alter_table_conflict_detection()](conflict_functions/#bdralter_table_conflict_detection). ## Global sequence parameters diff --git a/product_docs/docs/pgd/5/reference/streamtriggers/rowfunctions.mdx b/product_docs/docs/pgd/5/reference/streamtriggers/rowfunctions.mdx index 465cb1dc54e..77637ec7d08 100644 --- a/product_docs/docs/pgd/5/reference/streamtriggers/rowfunctions.mdx +++ b/product_docs/docs/pgd/5/reference/streamtriggers/rowfunctions.mdx @@ -77,7 +77,7 @@ bdr.trigger_get_type() This function returns the current conflict type if called inside a conflict trigger. Otherwise, returns `NULL`. -See [Conflict types](../../consistency/conflicts#list-of-conflict-types) +See [Conflict types](../../consistency/conflicts/02_types_of_conflict/) for possible return values of this function. #### Synopsis diff --git a/product_docs/docs/pgd/5/routing/installing_proxy.mdx b/product_docs/docs/pgd/5/routing/installing_proxy.mdx index 1d9a4da20d8..7af902d0e07 100644 --- a/product_docs/docs/pgd/5/routing/installing_proxy.mdx +++ b/product_docs/docs/pgd/5/routing/installing_proxy.mdx @@ -55,7 +55,7 @@ PGD Proxy uses endpoints given in the local config file only at proxy startup. A ##### Configuring health check -PGD Proxy provides [HTTP(S) health check APIs](../monitoring/#proxy-health-check). If the health checks are required, you can enable them by adding the following configuration parameters to the pgd-proxy configuration file. By default, it's disabled. +PGD Proxy provides [HTTP(S) health check APIs](monitoring/#proxy-health-check). If the health checks are required, you can enable them by adding the following configuration parameters to the pgd-proxy configuration file. By default, it's disabled. ```yaml cluster: diff --git a/product_docs/docs/pgd/5/security/roles.mdx b/product_docs/docs/pgd/5/security/roles.mdx index b387a5dd67d..cfe7b974c57 100644 --- a/product_docs/docs/pgd/5/security/roles.mdx +++ b/product_docs/docs/pgd/5/security/roles.mdx @@ -25,7 +25,7 @@ one has. Managing PGD doesn't require that administrators have access to user data. Arrangements for securing information about conflicts are discussed in -[Logging conflicts to a table](../consistency/conflicts#conflict-logging). +[Logging conflicts to a table](../reference/conflict_functions/#logging-conflicts-to-a-table). You can monitor conflicts using the [`bdr.conflict_history_summary`](/pgd/latest/reference/catalogs-visible#bdrconflict_history_summary) view. 
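To verify the health-check configuration shown earlier in this patch once it's enabled, the endpoints can be probed from the proxy host. The host name, port, and paths below are assumptions based on the linked monitoring page; match them to your `pgd-proxy` config file:

```shell
# Probe the PGD Proxy HTTP health-check endpoints (host, port, and
# paths are assumed examples).
curl -s http://pgd-proxy-1:8080/health/is-live
curl -s http://pgd-proxy-1:8080/health/is-ready
```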
diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx index e2c52bc4c89..e6c0615cbd1 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/architecture.mdx @@ -62,7 +62,7 @@ EDB Postgres Distributed for Kubernetes manages the following: - Data nodes. A node is a database and is managed by EDB Postgres for Kubernetes, creating a `Cluster` with a single instance. -- [Witness nodes](/pgd/latest/node_management/#witness-nodes) +- [Witness nodes](/pgd/latest/node_management/witness_nodes/) are basic database instances that don't participate in data replication. Their function is to guarantee that consensus is possible in groups with an even number of data nodes or after network partitions. Witness @@ -126,7 +126,7 @@ To function in Kubernetes, containers are provided for each Postgres distribution. These are the *operands*. In addition, the operator images are kept in those same repositories. -See [EDB private image registries](private_registries.md) +See [EDB private image registries](identify_images/private_registries.md) for details on accessing the images. ### Kubernetes architecture diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx index cdc0be4c0e5..f68d0c7ab8a 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/backup.mdx @@ -13,7 +13,7 @@ point-in-time recovery (PITR) is available. Multiple object stores are supported, such as AWS S3, Microsoft Azure Blob Storage, Google Cloud Storage, MinIO Gateway, or any S3-compatible provider. Given that EDB Postgres Distributed for Kubernetes configures the connection with object stores by relying on -EDB Postgres for Kubernetes, see the [EDB Postgres for Kubernetes cloud provider support](/postgres_for_kubernetes/latest/backup_recovery/#cloud-provider-support) +EDB Postgres for Kubernetes, see the [EDB Postgres for Kubernetes common object stores for backups](/postgres_for_kubernetes/latest/object_stores/) documentation for more information. !!! Important @@ -47,7 +47,7 @@ spec: maxParallel: 8 ``` -For more information, see the [EDB Postgres for Kubernetes WAL archiving](/postgres_for_kubernetes/latest/backup_recovery/#wal-archiving) documentation. +For more information, see the [EDB Postgres for Kubernetes WAL archiving](/postgres_for_kubernetes/latest/wal_archiving/) documentation. ## Scheduled backups @@ -111,7 +111,7 @@ spec: retentionPolicy: "30d" ``` -For more information, see the [EDB Postgres for Kubernetes retention policies](/postgres_for_kubernetes/latest/backup_recovery/#retention-policies) in the EDB Postgres for Kubernetes documentation. +For more information, see the [EDB Postgres for Kubernetes retention policies](/postgres_for_kubernetes/latest/backup_barmanobjectstore/#retention-policies) in the EDB Postgres for Kubernetes documentation. !!! Important Currently, the retention policy is applied only for the elected `Backup Node` @@ -125,19 +125,19 @@ For more information, see the [EDB Postgres for Kubernetes retention policies](/ ## Compression algorithms Backups and WAL files are uncompressed by default. However, multiple compression algorithms are -supported. 
For more information, see the [EDB Postgres for Kubernetes compression algorithms](/postgres_for_kubernetes/latest/backup_recovery/#compression-algorithms) documentation. +supported. For more information, see the [EDB Postgres for Kubernetes compression algorithms](/postgres_for_kubernetes/latest/backup_barmanobjectstore/#compression-algorithms) documentation. ## Tagging of backup objects It's possible to specify tags as key-value pairs for the backup objects, namely base backups, WAL files, and history files. -For more information, see the EDB Postgres for Kubernetes documentation about [tagging of backup objects](/postgres_for_kubernetes/latest/backup_recovery/#tagging-of-backup-objects). +For more information, see the EDB Postgres for Kubernetes documentation about [tagging of backup objects](/postgres_for_kubernetes/latest/backup_barmanobjectstore/#tagging-of-backup-objects). ## On-demand backups of a PGD node A PGD node is represented as single-instance EDB Postgres for Kubernetes `Cluster` object. As such, if you need to, it's possible to request an on-demand backup of a specific PGD node by creating a EDB Postgres for Kubernetes `Backup` resource. -To do that, see [EDB Postgres for Kubernetes on-demand backups](/postgres_for_kubernetes/latest/backup_recovery/#on-demand-backups) in the EDB Postgres for Kubernetes documentation. +To do that, see [EDB Postgres for Kubernetes on-demand backups](/postgres_for_kubernetes/latest/backup/#on-demand-backups) in the EDB Postgres for Kubernetes documentation. !!! Hint You can retrieve the list of EDB Postgres for Kubernetes clusters that make up your PGD group diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx index 645db8587d9..249df0b6ccc 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/installation_upgrade.mdx @@ -57,7 +57,7 @@ Set the environment variables for the `REPOSITORY_NAME` and `REPOSITORY_NAME`: and operand images are stored. Access requires a valid [EDB subscription plan](https://www.enterprisedb.com/products/plans-comparison). - To identify your access credentials, see [Accessing EDB private image registries](private_registries.md). + To identify your access credentials, see [Accessing EDB private image registries](identify_images/private_registries.md). Given that the container images for both the operator and the selected operand are in EDB's private registry, you need your credentials to enable `helm` to diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/openshift.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/openshift.mdx index 1e656a723b5..1e532d145c3 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/openshift.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/openshift.mdx @@ -12,7 +12,7 @@ installed on OpenShift using a web interface. You need access to the private EDB repository where both the operator and operand images are stored. Access requires a valid [EDB subscription plan](https://www.enterprisedb.com/products/plans-comparison). - See [Accessing EDB private image registries](private_registries.md) for details. + See [Accessing EDB private image registries](identify_images/private_registries.md) for details. 
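Tying the on-demand backup discussion above to a concrete object, the following is a hypothetical `Backup` resource for a single PGD node. The API group follows the EDB Postgres for Kubernetes convention cited in these hunks, but the resource name, namespace, and cluster name are assumptions:

```shell
# Request an on-demand backup of one PGD node (a single-instance Cluster).
# All names below are assumed examples.
kubectl apply -f - <<'EOF'
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Backup
metadata:
  name: pgd-node-ondemand
  namespace: pgd-group
spec:
  cluster:
    name: pgd-group-1
EOF
```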
The OpenShift install uses pull secrets to access the operand and operator images, which are held in a private repository. @@ -40,9 +40,9 @@ oc create secret docker-registry postgresql-operator-pull-secret \ Where: - `@@REPOSITORY@@` is the name of the repository, as explained in [Which repository to - choose?](private_registries.md#which-repository-to-choose). + choose?](identify_images/private_registries.md#which-repository-to-choose). - `@@TOKEN@@` is the repository token for your EDB account, as explained in - [How to retrieve the token](private_registries.md#how-to-retrieve-the-token). + [How to retrieve the token](identify_images/private_registries.md#how-to-retrieve-the-token). ## Installing the operator diff --git a/product_docs/docs/postgres_distributed_for_kubernetes/1/recovery.mdx b/product_docs/docs/postgres_distributed_for_kubernetes/1/recovery.mdx index d2b873ec5ba..1dec3595ad6 100644 --- a/product_docs/docs/postgres_distributed_for_kubernetes/1/recovery.mdx +++ b/product_docs/docs/postgres_distributed_for_kubernetes/1/recovery.mdx @@ -19,7 +19,7 @@ Before recovering from a backup: - When recovering in a newly created namespace, first set up a cert-manager CA issuer before deploying the recovered PGD group. -For more information, see [EDB Postgres for Kubernetes recovery - Additional considerations](/postgres_for_kubernetes/latest/bootstrap/#additional-considerations) in the EDB Postgres for Kubernetes documentation. +For more information, see [EDB Postgres for Kubernetes recovery - Additional considerations](/postgres_for_kubernetes/latest/recovery/#additional-considerations) in the EDB Postgres for Kubernetes documentation. ## Recovery from an object store diff --git a/product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx b/product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx index b548d2c526e..08c3fb4b241 100644 --- a/product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx +++ b/product_docs/docs/tpa/23/reference/tpaexec-download-packages.mdx @@ -24,7 +24,7 @@ are supported. container of the target operating system and uses that system's package manager to resolve dependencies and download all necessary packages. The required Docker setup for download-packages is the same as that for - [using Docker as a deployment platform](#platform-docker). + [using Docker as a deployment platform](../platform-docker/). ## Usage From 2263c56c7b455883245da71d1b5f2de69fc93c53 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Tue, 30 Jul 2024 13:19:12 +0000 Subject: [PATCH 13/15] Add missing dependencies file to PEM 8 --- .../docs/pem/8/installing/dependencies.mdx | 154 ++++++++++++++++++ 1 file changed, 154 insertions(+) create mode 100644 product_docs/docs/pem/8/installing/dependencies.mdx diff --git a/product_docs/docs/pem/8/installing/dependencies.mdx b/product_docs/docs/pem/8/installing/dependencies.mdx new file mode 100644 index 00000000000..1356647ae5a --- /dev/null +++ b/product_docs/docs/pem/8/installing/dependencies.mdx @@ -0,0 +1,154 @@ +--- +title: "Dependencies of the PEM Server and Agent on Linux" +navTitle: "Linux dependencies" +redirects: +- /pem/latest/installing_pem_server/pem_server_inst_linux/dependencies/ +--- + +The PEM Server and Agent packages for Linux have dependencies on various system libraries. +These dependencies are detailed below for reference. + +!!! Note +A PEM Agent is always installed alongside PEM Server, so all dependencies must be present on hosts where PEM Server (either the database or the web application) is installed. 
+!!! + +Typically, PEM is built against the latest version of each dependency available from the vendor repository for a given platform and architecture. +In some cases, PEM requires a newer version of a library than is available in the vendor repository. +In these cases a newer version of the package, prefixed with `edb-` is sourced from EDB's repositories. + +!!! Note +This information is provided for reference. Packages from vendor repositories are not supported or patched by EDB. +Refer to your operating system documentation or support provider for details of these packages. + +Because these dependencies are updated frequently, the tables below are valid only for the latest patch release of PEM. +!!! + +## Python 3 and mod_wsgi + +Python 3 and mod_wsgi (a Python module for Apache HTTPD) are required for PEM Server. + +| Platform | Architecture | Python/mod_wsgi package | Python version | Python path | +|-----------|--------------|----------------------------------------|----------------|------------------------------------------| +| RHEL 7 | x86_64 | `edb-python39/edb-python39-mod-wsgi` | 3.9 | `/usr/libexec/edb-python39/bin/python3` | +| | ppc64le | `edb-python39/edb-python39-mod-wsgi` | 3.9 | `/usr/libexec/edb-python39/bin/python3` | +| RHEL 8 | x86_64 | `edb-python310/edb-python310-mod-wsgi` | 3.10 | `/usr/libexec/edb-python310/bin/python3` | +| | ppc64le | `edb-python310/edb-python310-mod-wsgi` | 3.10 | `/usr/libexec/edb-python310/bin/python3` | +| | s390x | `edb-python310/edb-python310-mod-wsgi` | 3.10 | `/usr/libexec/edb-python310/bin/python3` | +| RHEL 9 | x86_64 | `python39/python39-mod_wsgi` | 3.9 | `/usr/bin/python3` | +| | ppc64le | `python39/python39-mod_wsgi` | 3.9 | `/usr/bin/python3` | +| | s390x | `python39/python39-mod_wsgi` | 3.9 | `/usr/bin/python3` | +| SLES 12 | x86_64 | `edb-python39/edb-python39-mod-wsgi` | 3.9 | `/usr/libexec/edb-python39/bin/python3` | +| | ppc64le | `edb-python39/edb-python39-mod-wsgi` | 3.9 | `/usr/libexec/edb-python39/bin/python3` | +| | s390x | `edb-python39/edb-python39-mod-wsgi` | 3.9 | `/usr/libexec/edb-python39/bin/python3` | +| SLES 15 | x86_64 | `edb-python310/edb-python310-mod-wsgi` | 3.10 | `/usr/libexec/edb-python310/bin/python3` | +| | ppc64le | `edb-python310/edb-python310-mod-wsgi` | 3.10 | `/usr/libexec/edb-python310/bin/python3` | +| | s390x | `edb-python310/edb-python310-mod-wsgi` | 3.10 | `/usr/libexec/edb-python310/bin/python3` | +| Ubuntu 20 | amd64 | `edb-python310/edb-python310-mod-wsgi` | 3.10 | `/usr/libexec/edb-python310/bin/python3` | +| Ubuntu 22 | amd64 | `python310//libapache2-mod-wsgi-py3` | 3.10 | `/usr/bin/python3` | +| Debian 10 | amd64 | `edb-python310/edb-python310-mod-wsgi` | 3.10 | `/usr/libexec/edb-python310/bin/python3` | +| Debian 11 | amd64 | `edb-python310/edb-python310-mod-wsgi` | 3.10 | `/usr/libexec/edb-python310/bin/python3` | +| Debian 12 | amd64/arm64 | `python311/libapache2-mod-wsgi-py3` | 3.11 | `/usr/bin/python3` | + +## OpenSSL + +The PEM Server and Agent require OpenSSL. 
+ +| Platform | Architecture | package-version | +|-----------|--------------|-------------------| +| RHEL 7 | x86_64 | `openssl-1.0.2k` | +| | ppc64le | `openssl-1.0.2k` | +| RHEL 8 | x86_64 | `openssl-1.1.1k` | +| | ppc64le | `openssl-1.1.1k` | +| | s390x | `openssl-1.1.1k` | +| RHEL 9 | x86_64 | `openssl-3.0.7` | +| | ppc64le | `openssl-3.0.7` | +| | s390x | `openssl-3.0.7` | +| SLES 12 | x86_64 | `openssl-1.0.2p` | +| | ppc64le | `openssl-1.0.2p` | +| | s390x | `openssl-1.0.2p` | +| SLES 15 | x86_64 | `openssl-1.1.1l` | +| | ppc64le | `openssl-1.1.1l` | +| | s390x | `openssl-1.1.1l` | +| Ubuntu 20 | amd64 | `openssl-1.1.1f` | +| Ubuntu 22 | amd64 | `openssl-3.0.2` | +| Debian 10 | amd64 | `openssl-1.1.1n` | +| Debian 11 | amd64 | `openssl-1.1.1w` | +| Debian 12 | amd64/arm64 | `openssl-3.0.11` | + +## Libcurl + +The PEM Agent requires libcurl. + +| Platform | Architecture | package-version | +|-----------|--------------|---------------------| +| RHEL 7 | x86_64 | `libcurl-pem-8.4.0` | +| | ppc64le | `libcurl-pem-8.4.0` | +| RHEL 8 | x86_64 | `libcurl-pem-8.4.0` | +| | ppc64le | `curl-7.61.1` | +| | s390x | `curl-7.61.1` | +| RHEL 9 | x86_64 | `curl-7.76.1` | +| | ppc64le | `curl-7.76.1` | +| | s390x | `curl-7.76.1` | +| SLES 12 | x86_64 | `curl-8.0.1` | +| | ppc64le | `curl-8.0.1` | +| | s390x | `curl-8.0.1` | +| SLES 15 | x86_64 | `curl-8.0.1` | +| | ppc64le | `curl-8.0.1` | +| | s390x | `curl-8.0.1` | +| Ubuntu 20 | amd64 | `libcurl4-7.68.0` | +| Ubuntu 22 | amd64 | `libcurl4-7.81.0` | +| Debian 10 | amd64 | `libcurl4-7.64.0` | +| Debian 11 | amd64 | `libcurl4-7.74.0` | +| Debian 12 | amd64/arm64 | `libcurl4-7.88.1` | + +## SNMP++ + +The PEM Agent requires SNMP++. + +| Platform | Architecture | package-version | +|-----------|--------------|---------------------| +| RHEL 7 | x86_64 | `snmp++-3.4.2` | +| | ppc64le | `snmp++-3.4.2` | +| RHEL 8 | x86_64 | `snmp++-3.4.2` | +| | ppc64le | `edb-snmp++-3.4.10` | +| | s390x | `edb-snmp++-3.4.7` | +| RHEL 9 | x86_64 | `edb-snmp++-3.4.10` | +| | ppc64le | `edb-snmp++-3.4.10` | +| | s390x | `edb-snmp++-3.4.10` | +| SLES 12 | x86_64 | `edb-snmp++-3.4.10` | +| | ppc64le | `edb-snmp++-3.4.10` | +| | s390x | `edb-snmp++-3.4.7` | +| SLES 15 | x86_64 | `edb-snmp++-3.4.10` | +| | ppc64le | `edb-snmp++-3.4.10` | +| | s390x | `edb-snmp++-3.4.7` | +| Ubuntu 20 | amd64 | `edb-snmp++-3.4.10` | +| Ubuntu 22 | amd64 | `edb-snmp++-3.4.10` | +| Debian 10 | amd64 | `edb-snmp++-3.4.10` | +| Debian 11 | amd64 | `edb-snmp++-3.4.10` | +| Debian 12 | amd64/arm64 | `edb-snmp++-3.4.10` | + +## Boost libraries + +The PEM Agent requires the Boost libraries. 
+ +| Platform | Architecture | package-version | +|-----------|--------------|--------------------------------| +| RHEL 7 | x86_64 | `boost169-system-1.69.0` | +| | ppc64le | `None boost package` | +| RHEL 8 | x86_64 | `boost169-system-1.69.0` | +| | ppc64le | `boost-system-1.66.0` | +| | s390x | `boost-system-1.66.0` | +| RHEL 9 | x86_64 | `boost-system-1.75.0` | +| | ppc64le | `boost-system-1.75.0` | +| | s390x | `boost-system-1.75.0` | +| SLES 12 | x86_64 | `libboost_system1_54_0-1.54.0` | +| | ppc64le | `libboost_system1_54_0-1.54.0` | +| | s390x | `libboost_system1_54_0-1.54.0` | +| SLES 15 | x86_64 | `libboost_regex1_66_1-1.66.0` | +| | ppc64le | `libboost_regex1_66_1-1.66.0` | +| | s390x | `libboost_regex1_66_1-1.66.0` | +| Ubuntu 20 | amd64 | `libboost-system1.71.0-1.71.0` | +| Ubuntu 22 | amd64 | `libboost-system1.74.0-1.74.0` | +| Debian 10 | amd64 | `libboost-system1.67.0-1.67.0` | +| Debian 11 | amd64 | `libboost-system1.74.0-1.74.0` | +| Debian 12 | amd64/arm64 | `libboost-system1.74.0-1.74.0` | \ No newline at end of file From 3519f2a8213b3dfb91b03f4109222522d7f298f2 Mon Sep 17 00:00:00 2001 From: Josh Heyer Date: Tue, 30 Jul 2024 15:19:00 +0000 Subject: [PATCH 14/15] Added local execution framework for link checker --- docker/docker-compose.check-links.yaml | 20 +++++++++++++++++++ package.json | 2 ++ tools/automation/actions/link-check/index.js | 10 +++++----- .../actions/link-check/package-lock.json | 1 - .../actions/link-check/package.json | 1 - 5 files changed, 27 insertions(+), 7 deletions(-) create mode 100644 docker/docker-compose.check-links.yaml diff --git a/docker/docker-compose.check-links.yaml b/docker/docker-compose.check-links.yaml new file mode 100644 index 00000000000..686f967c8b6 --- /dev/null +++ b/docker/docker-compose.check-links.yaml @@ -0,0 +1,20 @@ +services: + docs-link-checker: + build: + context: ../tools/automation/actions/link-check + dockerfile_inline: | + FROM node:20-alpine + COPY . 
/app + WORKDIR /app + RUN npm i + + container_name: docs-link-check + hostname: docs-link-check + working_dir: /app + command: sh -c "npm ci --loglevel=error && node index.js /app" + volumes: + - ../tools/automation/actions/link-check/index.js:/app/index.js:ro + - ../tools/automation/actions/link-check/package-lock.json:/app/package-lock.json:ro + - ../tools/automation/actions/link-check/package.json:/app/package.json:ro + - ../product_docs/docs:/app/product_docs/docs + - ../advocacy_docs:/app/advocacy_docs diff --git a/package.json b/package.json index 67440a2eefd..143a0429fdd 100644 --- a/package.json +++ b/package.json @@ -25,6 +25,8 @@ "pdf:build-all": "for i in product_docs/docs/**/*/ ; do echo \"$i\"; npm run pdf:build ${i%} || exit 1; done", "pdf:build-all-ci": "for i in product_docs/docs/**/*/ ; do echo \"$i\"; python3 scripts/pdf/generate_pdf.py ${i%} || exit 1; done", "pdf:rebuild-docker-image": "docker compose -f docker/docker-compose.build-pdf.yaml build --pull --no-cache", + "links:check": "docker compose -f docker/docker-compose.check-links.yaml run --rm docs-link-checker", + "links:rebuild-docker-image": "docker compose -f docker/docker-compose.check-links.yaml build --pull --no-cache", "prepare": "./scripts/husky-install.sh", "presetup": "./scripts/npm-preinstall.sh", "serve-build": "gatsby serve --prefix-paths", diff --git a/tools/automation/actions/link-check/index.js b/tools/automation/actions/link-check/index.js index 04e40eae71f..5e3ad664fb3 100644 --- a/tools/automation/actions/link-check/index.js +++ b/tools/automation/actions/link-check/index.js @@ -22,10 +22,10 @@ const noWarnPaths = [ "/playground/1/01_examples/link-tests", "/playground/1/01_examples/link-test", ]; -const basePath = path.resolve( - path.dirname(new URL(import.meta.url).pathname), - "../../../..", -); +const args = process.argv.slice(2); +const basePath = + args[0] || + path.resolve(path.dirname(new URL(import.meta.url).pathname), "../../../.."); let ghCore = core; @@ -249,7 +249,7 @@ Files updated: **${filesUpdated}**`); ghCore.summary.addRaw(` **${linksUpdated}** links could be updated to avoid redirects; -run \`node tools/automation/actions/link-check\` locally.`); +run \`npm run links:check\` locally.`); ghCore.summary.write(); diff --git a/tools/automation/actions/link-check/package-lock.json b/tools/automation/actions/link-check/package-lock.json index fb516252e98..33057da17af 100644 --- a/tools/automation/actions/link-check/package-lock.json +++ b/tools/automation/actions/link-check/package-lock.json @@ -12,7 +12,6 @@ "@actions/github": "^6.0.0", "fast-glob": "^3.2.12", "github-slugger": "^1.5.0", - "hast-util-to-html": "^7.1.3", "html-void-elements": "^2.0.1", "is-absolute-url": "^3.0.3", "js-yaml": "^4.1.0", diff --git a/tools/automation/actions/link-check/package.json b/tools/automation/actions/link-check/package.json index 8588347ba8b..a85386017e7 100644 --- a/tools/automation/actions/link-check/package.json +++ b/tools/automation/actions/link-check/package.json @@ -13,7 +13,6 @@ "@actions/github": "^6.0.0", "fast-glob": "^3.2.12", "github-slugger": "^1.5.0", - "hast-util-to-html": "^7.1.3", "html-void-elements": "^2.0.1", "is-absolute-url": "^3.0.3", "js-yaml": "^4.1.0", From df2bda958c1a2c5258ae2e2a0cba5b3df26e77bf Mon Sep 17 00:00:00 2001 From: Dj Walker-Morgan Date: Tue, 30 Jul 2024 18:10:59 +0100 Subject: [PATCH 15/15] review fixes (links adrift) Signed-off-by: Dj Walker-Morgan --- .../edb-postgres-ai/ai-ml/install-tech-preview.mdx | 2 +- 
.../edb-postgres-ai/ai-ml/using-tech-preview/index.mdx | 9 ++++++--- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx index 2f4f1e1eaa4..fb4bab5791e 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/install-tech-preview.mdx @@ -11,7 +11,7 @@ The preview release of aidb is distributed as a self-contained Docker container If you haven't already, sign up for an EDB account and log in to the EDB container registry. -Log in to Docker with the username tech-preview and your EDB Repo 2.0 subscription token as your password: +Log in to Docker with the username tech-preview and your EDB Repos 2.0 subscription token as your password: ```shell docker login docker.enterprisedb.com -u tech-preview -p diff --git a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx index 75f2519845a..e5cf52f6c2d 100644 --- a/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx +++ b/advocacy_docs/edb-postgres-ai/ai-ml/using-tech-preview/index.mdx @@ -8,10 +8,13 @@ navigation: - additional_functions --- -This section shows how you can use your [newly installed aidb tech preview](install-tech-preview) to retrieve and generate AI data in Postgres. +This section shows how you can use your [newly installed aidb tech preview](../install-tech-preview) to retrieve and generate AI data in Postgres. * [Working with AI data in Postgres](working-with-ai-data-in-postgres) details how to use the aidb extension to work with AI data stored in Postgres tables. -* [Working with AI data in S3](working-with-ai-data-in-s3) covers how to use the aidb extension to work with AI data stored in S3 compatible object storage. -* [Standard encoders](standard-encoders) goes through the standard encoder LLMs that are supported by the aidb extension. + +* [Working with AI data in S3](working-with-ai-data-in-S3) covers how to use the aidb extension to work with AI data stored in S3 compatible object storage. + +* [Additional functions](additional_functions) notes other aidb extension functions and how to generate standalone embeddings for images and text. +