diff --git a/docs/modules/ROOT/pages/list-of-metrics.adoc b/docs/modules/ROOT/pages/list-of-metrics.adoc index da22df11b..a1a5dfbea 100644 --- a/docs/modules/ROOT/pages/list-of-metrics.adoc +++ b/docs/modules/ROOT/pages/list-of-metrics.adoc @@ -202,7 +202,7 @@ be 1 under low load. |numInFlightOps |The number of pending (in flight) operations when using -asynchronous mapping processors. See https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/core/processor/Processors.html#mapUsingServiceAsyncP-com.hazelcast.jet.pipeline.ServiceFactory-int-boolean-com.hazelcast.function.FunctionEx-com.hazelcast.function.BiFunctionEx-[Processors.mapUsingServiceAsyncP]. +asynchronous mapping processors. See https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/core/processor/Processors.html#mapUsingServiceAsyncP-com.hazelcast.jet.pipeline.ServiceFactory-int-boolean-com.hazelcast.function.FunctionEx-com.hazelcast.function.BiFunctionEx-[Processors.mapUsingServiceAsyncP]. .6+|_job, exec, vertex, proc, procType_ Processor specific metrics, only certain types of processors @@ -220,7 +220,7 @@ processor. |totalWindows |The number of active windows being tracked by a session window processor. See -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/core/processor/Processors.html#aggregateToSessionWindowP-long-long-java.util.List-java.util.List-com.hazelcast.jet.aggregate.AggregateOperation-com.hazelcast.jet.core.function.KeyedWindowResultFunction-[Processors.aggregateToSessionWindowP]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/core/processor/Processors.html#aggregateToSessionWindowP-long-long-java.util.List-java.util.List-com.hazelcast.jet.aggregate.AggregateOperation-com.hazelcast.jet.core.function.KeyedWindowResultFunction-[Processors.aggregateToSessionWindowP]. |totalFrames @@ -230,7 +230,7 @@ https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/core/pr |totalKeysInFrames |The number of grouping keys associated with the current active frames of a sliding window processor. See -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/core/processor/Processors.html#aggregateToSlidingWindowP-java.util.List-java.util.List-com.hazelcast.jet.core.TimestampKind-com.hazelcast.jet.core.SlidingWindowPolicy-long-com.hazelcast.jet.aggregate.AggregateOperation-com.hazelcast.jet.core.function.KeyedWindowResultFunction-[Processors.aggregateToSlidingWindowP]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/core/processor/Processors.html#aggregateToSlidingWindowP-java.util.List-java.util.List-com.hazelcast.jet.core.TimestampKind-com.hazelcast.jet.core.SlidingWindowPolicy-long-com.hazelcast.jet.aggregate.AggregateOperation-com.hazelcast.jet.core.function.KeyedWindowResultFunction-[Processors.aggregateToSlidingWindowP]. 
|lateEventsDropped diff --git a/docs/modules/clients/pages/java.adoc b/docs/modules/clients/pages/java.adoc index 592726484..17ff5da65 100644 --- a/docs/modules/clients/pages/java.adoc +++ b/docs/modules/clients/pages/java.adoc @@ -1,5 +1,5 @@ = Java Client -:page-api-reference: https://docs.hazelcast.org/docs/{page-latest-supported-java-client}/javadoc +:page-api-reference: https://docs.hazelcast.org/docs/{os-version}/javadoc :page-toclevels: 1 :page-aliases: security:native-client-security.adoc :description: Hazelcast provides a {java-client} within the standard distribution you can start using right away, and also a lightweight {java-client-new} that is available in Beta. @@ -19,7 +19,7 @@ NOTE: Where there are differences between {java-client} and {java-client-new}, t Both clients enable you to use the Hazelcast API — this topic explains any differences or technical details that affect usage. Read this alongside the respective Javadoc-generated API documentation available from within your IDE together with the following: -* https://docs.hazelcast.org/docs/{page-latest-supported-java-client}/javadoc[Hazelcast {java-client} API documentation] +* https://docs.hazelcast.org/docs/{os-version}/javadoc[Hazelcast {java-client} API documentation] * https://docs.hazelcast.org/hazelcast-java-client/{page-latest-supported-java-client-new}/javadoc[Hazelcast {java-client-new} API documentation] == Get started @@ -508,7 +508,7 @@ security: For more information, see the appropriate API documentation for your client: -* https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/client/config/ClientSecurityConfig.html[{java-client-new} ClientSecurityConfig API documentation] +* https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/client/config/ClientSecurityConfig.html[{java-client-new} ClientSecurityConfig API documentation] * https://docs.hazelcast.org/hazelcast-java-client/{page-latest-supported-java-client-new}/javadoc/com/hazelcast/client/config/ClientSecurityConfig.html[{java-client} ClientSecurityConfig API documentation] [[classloader]] diff --git a/docs/modules/cluster-performance/pages/data-affinity.adoc b/docs/modules/cluster-performance/pages/data-affinity.adoc index 1281ac056..d0658d4eb 100644 --- a/docs/modules/cluster-performance/pages/data-affinity.adoc +++ b/docs/modules/cluster-performance/pages/data-affinity.adoc @@ -251,7 +251,7 @@ psConfig.setPartitioningStrategy(YourCustomPartitioningStrategy); <1> ... ---- ==== -<1> You can define your own partition strategy by implementing the class https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/partition/PartitioningStrategy.html[`PartitioningStrategy`]. To enable your implementation, add the full class name to your Hazelcast configuration using either +<1> You can define your own partition strategy by implementing the class https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/partition/PartitioningStrategy.html[`PartitioningStrategy`]. To enable your implementation, add the full class name to your Hazelcast configuration using either the declarative or programmatic approach, as shown above. NOTE: All the cluster members must have the same partitioning strategy configurations. 
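As a rough sketch of the custom strategy mentioned in callout <1> above, the class below routes all keys that share the same suffix to the same partition. The class name, key format, and the `orders` map are illustrative only, not part of the Hazelcast API:

[source,java]
----
import com.hazelcast.config.Config;
import com.hazelcast.config.PartitioningStrategyConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.partition.PartitioningStrategy;

// Illustrative strategy: keys such as "order-42@eu" and "order-43@eu" share the
// partition key "eu", so their entries are stored in the same partition.
public class RegionPartitioningStrategy implements PartitioningStrategy<String> {

    @Override
    public Object getPartitionKey(String key) {
        int at = key.indexOf('@');
        return at >= 0 ? key.substring(at + 1) : key;
    }

    public static void main(String[] args) {
        Config config = new Config();
        // Register the strategy for a hypothetical "orders" map; remember that every
        // member must be started with the same partitioning strategy configuration.
        config.getMapConfig("orders").setPartitioningStrategyConfig(
                new PartitioningStrategyConfig(RegionPartitioningStrategy.class.getName()));
        Hazelcast.newHazelcastInstance(config);
    }
}
----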
diff --git a/docs/modules/cluster-performance/pages/imap-bulk-read-operations.adoc b/docs/modules/cluster-performance/pages/imap-bulk-read-operations.adoc index 96fc15ea5..034b5d93c 100644 --- a/docs/modules/cluster-performance/pages/imap-bulk-read-operations.adoc +++ b/docs/modules/cluster-performance/pages/imap-bulk-read-operations.adoc @@ -23,7 +23,7 @@ NOTE: To help you to monitor this potential problem, client invocations of certa The threshold for logging large results is defined by the Hazelcast property `hazelcast.expensive.imap.invocation.reporting.threshold`, which has a default value of `100` results. -The relevant methods are listed in the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/spi/properties/ClusterProperty.html#EXPENSIVE_IMAP_INVOCATION_REPORTING_THRESHOLD[Javadocs^]. +The relevant methods are listed in the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/spi/properties/ClusterProperty.html#EXPENSIVE_IMAP_INVOCATION_REPORTING_THRESHOLD[Javadocs^]. == Plan capacity Proper capacity planning is crucial for providing diff --git a/docs/modules/cluster-performance/pages/near-cache.adoc b/docs/modules/cluster-performance/pages/near-cache.adoc index b73d49570..b9a8fea23 100644 --- a/docs/modules/cluster-performance/pages/near-cache.adoc +++ b/docs/modules/cluster-performance/pages/near-cache.adoc @@ -185,7 +185,7 @@ include::ROOT:example$/performance/ExampleNearCacheConfiguration.java[tag=nearca The element `` has an optional attribute `name` with a default value of `default`. -The class https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/NearCacheConfig.html[NearCacheConfig^] +The class https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/NearCacheConfig.html[NearCacheConfig^] is used for all supported Hazelcast data structures on members and clients. The following are the descriptions of all configuration elements and attributes: diff --git a/docs/modules/cluster-performance/pages/thread-per-core-tpc.adoc b/docs/modules/cluster-performance/pages/thread-per-core-tpc.adoc index 906ee81ae..b085cb955 100644 --- a/docs/modules/cluster-performance/pages/thread-per-core-tpc.adoc +++ b/docs/modules/cluster-performance/pages/thread-per-core-tpc.adoc @@ -15,7 +15,7 @@ A Thread-Per-Core (TPC) design uses one thread for every CPU core and every thre When TPC is enabled, clients using the `ALL_MEMBERS` cluster routing mode connect to the number of cores specified in the client config. If TPC is enabled on both client and server, clients connect directly to one of the TPC threads instead of using the legacy network threads. TPC-enabled servers continue to use the same ports for discovery, which means there's no difference in how the cluster member list is created and TPC-aware servers are backward compatible with clients that don't use TPC. -NOTE: Clients using the `SINGLE_MEMBER` or `MULTI_MEMBER` cluster routing modes cannot use TPC. Your clients must use the `ALL_MEMBERS` cluster routing mode with TPC. For further information on the `ClientTpcConfig` class used to specify the number of cores to use for connections, refer to https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/client/config/ClientTpcConfig.html[Class ClientTpcConfig] in the Java API documentation. +NOTE: Clients using the `SINGLE_MEMBER` or `MULTI_MEMBER` cluster routing modes cannot use TPC. Your clients must use the `ALL_MEMBERS` cluster routing mode with TPC. 
For further information on the `ClientTpcConfig` class used to specify the number of cores to use for connections, refer to https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/client/config/ClientTpcConfig.html[Class ClientTpcConfig] in the Java API documentation. [[tpc-config]] == Configuration Options diff --git a/docs/modules/clusters/pages/ucn-security.adoc b/docs/modules/clusters/pages/ucn-security.adoc index 064f26a0f..7d5d0f8e9 100644 --- a/docs/modules/clusters/pages/ucn-security.adoc +++ b/docs/modules/clusters/pages/ucn-security.adoc @@ -7,6 +7,6 @@ Permissions are set using the `UserCodeNamespacePermission` class, which extends the `InstancePermission` class. -For further information on the `UserCodeNamespacePermission` class, refer to https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/security/permission/UserCodeNamespacePermission.html[Class UserCodeNamespacePermission^] in the Java API documentation. +For further information on the `UserCodeNamespacePermission` class, refer to https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/security/permission/UserCodeNamespacePermission.html[Class UserCodeNamespacePermission^] in the Java API documentation. For further information on client permissions with {ucn}, see the xref:security:client-authorization.adoc[] topic. diff --git a/docs/modules/computing/pages/durable-executor-service.adoc b/docs/modules/computing/pages/durable-executor-service.adoc index 977ee0eef..1b5ab2692 100644 --- a/docs/modules/computing/pages/durable-executor-service.adoc +++ b/docs/modules/computing/pages/durable-executor-service.adoc @@ -94,7 +94,7 @@ The following is a list of methods, grouped by the operations, that support spli **Configuring Split-Brain Protection** -Split-brain protection for Durable Executor Service can be configured programmatically using the method https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/DurableExecutorConfig.html[setSplitBrainProtectionName()^], or declaratively using the element `split-brain-protection-ref`. Following is an example declarative configuration: +Split-brain protection for Durable Executor Service can be configured programmatically using the method https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/DurableExecutorConfig.html[setSplitBrainProtectionName()^], or declaratively using the element `split-brain-protection-ref`. Following is an example declarative configuration: [tabs] ==== diff --git a/docs/modules/computing/pages/executor-service.adoc b/docs/modules/computing/pages/executor-service.adoc index e931a1940..d41cea21f 100644 --- a/docs/modules/computing/pages/executor-service.adoc +++ b/docs/modules/computing/pages/executor-service.adoc @@ -309,7 +309,7 @@ The following is a list of methods, grouped by the operations, that support spli **Configuring Split-Brain Protection** -Split-brain protection for Executor Service can be configured programmatically using the method https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/ExecutorConfig.html[setSplitBrainProtectionName()^], or declaratively using the element `split-brain-protection-ref`. Following is an example declarative configuration: +Split-brain protection for Executor Service can be configured programmatically using the method https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/ExecutorConfig.html[setSplitBrainProtectionName()^], or declaratively using the element `split-brain-protection-ref`. 
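For the programmatic route mentioned above, a minimal sketch could look like the following; the executor name, the protection name, and the minimum cluster size are placeholders:

[source,java]
----
import com.hazelcast.config.Config;
import com.hazelcast.config.ExecutorConfig;
import com.hazelcast.config.SplitBrainProtectionConfig;

Config config = new Config();

// A split-brain protection that requires at least three live members (name is a placeholder).
config.addSplitBrainProtectionConfig(
        new SplitBrainProtectionConfig("splitBrainProtectionRuleWithThreeMembers", true, 3));

// Reference that protection from the executor configuration.
config.addExecutorConfig(
        new ExecutorConfig("default")
                .setSplitBrainProtectionName("splitBrainProtectionRuleWithThreeMembers"));
----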
Following is an example declarative configuration: [tabs] ==== diff --git a/docs/modules/computing/pages/scheduled-executor-service.adoc b/docs/modules/computing/pages/scheduled-executor-service.adoc index 0317bf802..50c74e65e 100644 --- a/docs/modules/computing/pages/scheduled-executor-service.adoc +++ b/docs/modules/computing/pages/scheduled-executor-service.adoc @@ -13,7 +13,7 @@ On top of the Vanilla Scheduling API, IScheduledExecutorService allows additiona * `scheduleOnAllMembers`: On all cluster members. * `scheduleOnMembers`: On all given members. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/scheduledexecutor/IScheduledExecutorService.html[IScheduledExecutorService Javadoc^] for its API details. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/scheduledexecutor/IScheduledExecutorService.html[IScheduledExecutorService Javadoc^] for its API details. There are two different modes of durability for the service: @@ -27,7 +27,7 @@ The name of the task can be user-defined if it needs to be, by implementing the Upon scheduling, the service returns an `IScheduledFuture`, which on top of the `java.util.concurrent.ScheduledFuture` functionality, provides an API to get the resource handler of the task `ScheduledTaskHandler` and also the runtime statistics of the task. -Futures associated with a scheduled task, in order to be aware of lost partitions and/or members, act as listeners on the local member/client. Therefore, they are always strongly referenced, on the member/client side. In order to clean up their resources, once completed, you can use the method `dispose()`. This method also cancels further executions of the task if scheduled at a fixed rate. See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/scheduledexecutor/IScheduledFuture.html[IScheduledFuture Javadoc^] for its API details. +Futures associated with a scheduled task, in order to be aware of lost partitions and/or members, act as listeners on the local member/client. Therefore, they are always strongly referenced, on the member/client side. In order to clean up their resources, once completed, you can use the method `dispose()`. This method also cancels further executions of the task if scheduled at a fixed rate. See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/scheduledexecutor/IScheduledFuture.html[IScheduledFuture Javadoc^] for its API details. The task handler is a descriptor class holding information for the scheduled future, which is used to pinpoint the actual task in the cluster. It contains the name of the task, the owner (member or partition) and the scheduler name. @@ -168,7 +168,7 @@ The following is a list of methods, grouped by the operations, that support spli **Configuring Split-Brain Protection** -Split-brain protection for Scheduled Executor Service can be configured programmatically using the method https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/ScheduledExecutorConfig.html[setSplitBrainProtectionName()^], or declaratively using the element `split-brain-protection-ref`. Following is an example declarative configuration: +Split-brain protection for Scheduled Executor Service can be configured programmatically using the method https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/ScheduledExecutorConfig.html[setSplitBrainProtectionName()^], or declaratively using the element `split-brain-protection-ref`. 
Following is an example declarative configuration: [tabs] ==== diff --git a/docs/modules/configuration/pages/configuring-declaratively.adoc b/docs/modules/configuration/pages/configuring-declaratively.adoc index 58976ff6f..8a094b117 100644 --- a/docs/modules/configuration/pages/configuring-declaratively.adoc +++ b/docs/modules/configuration/pages/configuring-declaratively.adoc @@ -288,7 +288,7 @@ HazelcastInstance hz = Hazelcast.newHazelcastInstance(config); Variable replacers are used to replace custom strings during startup when a cluster first loads a configuration file. For example, you can use a variable replacer to mask sensitive information such as usernames and passwords. Variable replacers implement the interface `com.hazelcast.config.replacer.spi.ConfigReplacer`. For basic information about how a replacer works, see the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/replacer/spi/ConfigReplacer.html[Javadoc^]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/replacer/spi/ConfigReplacer.html[Javadoc^]. [tabs] ==== diff --git a/docs/modules/configuration/pages/configuring-programmatically.adoc b/docs/modules/configuration/pages/configuring-programmatically.adoc index ca2d4c759..913323f86 100644 --- a/docs/modules/configuration/pages/configuring-programmatically.adoc +++ b/docs/modules/configuration/pages/configuring-programmatically.adoc @@ -26,7 +26,7 @@ xref:configuring-declaratively.adoc[Configuration files] allow you to store memb To use a configuration file to configure members, you can use one of the following: -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/Config.html#setConfigurationFile-java.io.File-[`Config.setConfigurationFile()`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/Config.html#setConfigurationFile-java.io.File-[`Config.setConfigurationFile()`] - <> @@ -42,23 +42,23 @@ Use one of the following methods to load a configuration file into a config obje |=== |Method|Description -|link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/Config.html#loadFromFile-java.io.File-java.util.Properties-[`Config.loadFromFile()`] +|link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/Config.html#loadFromFile-java.io.File-java.util.Properties-[`Config.loadFromFile()`] |Creates a configuration object based on a Hazelcast configuration file (XML or YAML). -|link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/Config.html#loadDefault-java.util.Properties-[`Config.loadDefault()`] +|link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/Config.html#loadDefault-java.util.Properties-[`Config.loadDefault()`] |Loads `Config` using the default lookup mechanism to locate the configuration file. Loads the nested Hazelcast configuration also by using its default lookup mechanism. -|link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/Config.html#loadFromClasspath-java.lang.ClassLoader-java.lang.String-java.util.Properties-[`Config.loadFromClasspath()`] +|link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/Config.html#loadFromClasspath-java.lang.ClassLoader-java.lang.String-java.util.Properties-[`Config.loadFromClasspath()`] |Creates `Config` which is loaded from a classpath resource. 
-|link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/Config.html#loadFromString-java.lang.String-java.util.Properties-[`Config.loadFromString()`] +|link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/Config.html#loadFromString-java.lang.String-java.util.Properties-[`Config.loadFromString()`] |Creates `Config` from the provided XML or YAML string content. -|link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/Config.html#loadFromStream-java.io.InputStream-java.util.Properties-[`Config.loadFromStream()`] +|link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/Config.html#loadFromStream-java.io.InputStream-java.util.Properties-[`Config.loadFromStream()`] |Creates `Config` from the provided XML or YAML stream content. -|link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/XmlConfigBuilder.html#0[`XMLConfigBuilder()`] +|link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/XmlConfigBuilder.html#0[`XMLConfigBuilder()`] |Builds a `Config` that does not apply overrides in environment variables or system properties. |=== @@ -76,26 +76,26 @@ XML:: + -- -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/XmlConfigBuilder.html[`xmlConfigBuilder`] -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/ClasspathXmlConfig.html[`ClasspathXmlConfig`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/XmlConfigBuilder.html[`xmlConfigBuilder`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/ClasspathXmlConfig.html[`ClasspathXmlConfig`] -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/FileSystemXmlConfig.html[`FileSystemXmlConfig`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/FileSystemXmlConfig.html[`FileSystemXmlConfig`] -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/UrlXmlConfig.html[`UrlXmlConfig`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/UrlXmlConfig.html[`UrlXmlConfig`] -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/InMemoryXmlConfig.html[`InMemoryXmlConfig`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/InMemoryXmlConfig.html[`InMemoryXmlConfig`] -- YAML:: + -- -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/YamlConfigBuilder.html[`yamlConfigBuilder`] -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/ClasspathYamlConfig.html[`ClasspathYamlConfig`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/YamlConfigBuilder.html[`yamlConfigBuilder`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/ClasspathYamlConfig.html[`ClasspathYamlConfig`] -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/FileSystemYamlConfig.html[`FileSystemYamlConfig`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/FileSystemYamlConfig.html[`FileSystemYamlConfig`] -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/UrlYamlConfig.html[`UrlYamlConfig`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/UrlYamlConfig.html[`UrlYamlConfig`] -- 
link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/InMemoryYamlConfig.html[`InMemoryYamlConfig`] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/InMemoryYamlConfig.html[`InMemoryYamlConfig`] -- ==== diff --git a/docs/modules/data-connections/pages/data-connection-service.adoc b/docs/modules/data-connections/pages/data-connection-service.adoc index 708158071..c64439c8e 100644 --- a/docs/modules/data-connections/pages/data-connection-service.adoc +++ b/docs/modules/data-connections/pages/data-connection-service.adoc @@ -34,20 +34,20 @@ jdbcDataConnection.release(); <4> == Retrieve Data Connection service Before working with data connections you need to retrieve an instance of the `DataConnectionService`. Use -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/HazelcastInstance.html#getDataConnectionService()[`HazelcastInstance#getDataConnectionService()`] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/HazelcastInstance.html#getDataConnectionService()[`HazelcastInstance#getDataConnectionService()`] to obtain an instance of `DataConnectionService`. You can implement HazelcastInstanceAware in listeners, entry processors, tasks etc. to get access to the `HazelcastInstance`. In the pipeline API you can use -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/core/ProcessorMetaSupplier.Context.html#dataConnectionService()[ProcessorMetaSupplier.Context#dataConnectionService()]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/core/ProcessorMetaSupplier.Context.html#dataConnectionService()[ProcessorMetaSupplier.Context#dataConnectionService()]. NOTE: The Data Connection Service is only available on the member side. Calling `getDataConnectionService()` on a client results in `UnsupportedOperationException`. == Retrieve configured DataConnection -Use the `DataConnectionService` to get an instance of previously configured data connection https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/dataconnection/DataConnectionService.html#getAndRetainDataConnection(java.lang.String,java.lang.Class)[DataConnectionService#getAndRetainDataConnection(String, Class)]. For details how to configure a data connection, please refer +Use the `DataConnectionService` to get an instance of previously configured data connection https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/dataconnection/DataConnectionService.html#getAndRetainDataConnection(java.lang.String,java.lang.Class)[DataConnectionService#getAndRetainDataConnection(String, Class)]. For details how to configure a data connection, please refer to the xref:data-connections-configuration.adoc[Configuring Data Connections] page. == Data Connection scope diff --git a/docs/modules/data-connections/pages/data-connections-configuration.adoc b/docs/modules/data-connections/pages/data-connections-configuration.adoc index 1e37b1fc8..2e9bfa233 100644 --- a/docs/modules/data-connections/pages/data-connections-configuration.adoc +++ b/docs/modules/data-connections/pages/data-connections-configuration.adoc @@ -253,7 +253,7 @@ include::mongo-dc-configuration.adoc[] === Example Hazelcast Data Connection This example configuration shows a data connection to a remote Hazelcast cluster. 
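Relating this back to the `DataConnectionService` retrieval described further up, a minimal member-side sketch might look like the following; the connection name `my-jdbc` is an assumption, and the JDBC type is just one example of a previously configured connection:

[source,java]
----
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.dataconnection.DataConnectionService;
import com.hazelcast.dataconnection.JdbcDataConnection;

HazelcastInstance hz = Hazelcast.newHazelcastInstance();

// Members only; calling this on a client throws UnsupportedOperationException.
DataConnectionService dataConnectionService = hz.getDataConnectionService();

// Retain a previously configured JDBC data connection by name and expected type.
JdbcDataConnection jdbc =
        dataConnectionService.getAndRetainDataConnection("my-jdbc", JdbcDataConnection.class);
try {
    // ... obtain a java.sql.Connection from the data connection and query with it ...
} finally {
    jdbc.release(); // always release the retained reference when done
}
----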
-You can use a Hazelcast data connection from the Pipeline API in link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/pipeline/Sources.html#remoteMapJournal-java.lang.String-com.hazelcast.jet.pipeline.DataConnectionRef-com.hazelcast.jet.pipeline.JournalInitialPosition-com.hazelcast.function.FunctionEx-com.hazelcast.function.PredicateEx-[Sources#remoteMapJournal] source. +You can use a Hazelcast data connection from the Pipeline API in link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/pipeline/Sources.html#remoteMapJournal-java.lang.String-com.hazelcast.jet.pipeline.DataConnectionRef-com.hazelcast.jet.pipeline.JournalInitialPosition-com.hazelcast.function.FunctionEx-com.hazelcast.function.PredicateEx-[Sources#remoteMapJournal] source. NOTE: Currently, no SQL connector is available for Hazelcast data connections. This means that although you can xref:sql:create-data-connection.adoc[create a data connection in SQL], you cannot yet use it in SQL, for example, in a mapping statement. @@ -391,7 +391,7 @@ You can specify an external file with the `client_xml_path` or `client_yml_path` Data connections have the following configuration options. -NOTE: If you are using Java to configure the Mapstore, use the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/DataConnectionConfig.html[`DataConnectionConfig` object]. +NOTE: If you are using Java to configure the Mapstore, use the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/DataConnectionConfig.html[`DataConnectionConfig` object]. .Data connection configuration options [cols="1a,1a",options="header"] @@ -408,7 +408,7 @@ NOTE: If you are using Java to configure the Mapstore, use the link:https://docs |Any configuration properties that the data connection expects to receive. |`shared` -|Whether the data connection instance is reusable in different MapStores, jobs, and SQL mappings. This behavior depends on the implementation of the specific data connection. The default value is `true`. See the implementation of each data connection type for full details of reusability: link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/kafka/KafkaDataConnection.html[`KafkaDataConnection`], link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/mongodb/dataconnection/MongoDataConnection.html[`MongoDataConnection`], link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/dataconnection/HazelcastDataConnection.html[`HazelcastDataConnection`]. +|Whether the data connection instance is reusable in different MapStores, jobs, and SQL mappings. This behavior depends on the implementation of the specific data connection. The default value is `true`. See the implementation of each data connection type for full details of reusability: link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/kafka/KafkaDataConnection.html[`KafkaDataConnection`], link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/mongodb/dataconnection/MongoDataConnection.html[`MongoDataConnection`], link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/dataconnection/HazelcastDataConnection.html[`HazelcastDataConnection`]. 
|=== diff --git a/docs/modules/data-structures/pages/cardinality-estimator-service.adoc b/docs/modules/data-structures/pages/cardinality-estimator-service.adoc index 2f54e998c..a5d2d66f9 100644 --- a/docs/modules/data-structures/pages/cardinality-estimator-service.adoc +++ b/docs/modules/data-structures/pages/cardinality-estimator-service.adoc @@ -33,7 +33,7 @@ NOTE: Objects must be serializable in a form that Hazelcast understands. + . Compute the estimate of the set so far `estimator.estimate()`. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/cardinality/CardinalityEstimator.html[cardinality estimator Javadoc^] +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/cardinality/CardinalityEstimator.html[cardinality estimator Javadoc^] for more information about its API. The following is an example code. @@ -63,7 +63,7 @@ split-brain protection checks: **Configuring Split-Brain Protection** Split-brain protection for Cardinality Estimator can be configured -programmatically using the method https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/CardinalityEstimatorConfig.html[setSplitBrainProtectionName()^], +programmatically using the method https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/CardinalityEstimatorConfig.html[setSplitBrainProtectionName()^], or declaratively using the element `split-brain-protection-ref`. Following is an example declarative configuration: @@ -107,7 +107,7 @@ merge policy. When an estimator merges into the cluster, an estimator with the same name might already exist in the cluster. So the merge policy resolves these kinds of conflicts with different out-of-the-box strategies. It can be configured programmatically using the method -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/CardinalityEstimatorConfig.html[setMergePolicyConfig()^], +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/CardinalityEstimatorConfig.html[setMergePolicyConfig()^], or declaratively using the element `merge-policy`. Following is an example declarative configuration: diff --git a/docs/modules/data-structures/pages/entry-processor.adoc b/docs/modules/data-structures/pages/entry-processor.adoc index 83566bff3..0ffee40c3 100644 --- a/docs/modules/data-structures/pages/entry-processor.adoc +++ b/docs/modules/data-structures/pages/entry-processor.adoc @@ -41,7 +41,7 @@ NOTE: When `in-memory-format` is `OBJECT`, the old value of the updated entry wi === Processing Entries -The https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/IMap.html[IMap interface^] provides the following methods for entry processing: +The https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/IMap.html[IMap interface^] provides the following methods for entry processing: * `executeOnKey` processes an entry mapped by a key, blocking until the processing is complete and the result is returned. * `executeOnKeys` processes entries mapped by a collection of keys, blocking until the processing is complete and the results are returned. @@ -49,9 +49,9 @@ The https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/IMa * `executeOnEntries` processes all entries in a map, blocking until the processing is complete and the results are returned. * `executeOnEntries` also processes all entries in a map matching the provided predicate, blocking until the processing is complete and the results are returned. 
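A minimal sketch of the first of these methods; the map name, key, and increment logic are illustrative:

[source,java]
----
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.EntryProcessor;
import com.hazelcast.map.IMap;

HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IMap<String, Integer> counters = hz.getMap("counters");

// The processor runs on the member that owns the key, so the read-modify-write
// below is atomic for this entry.
Integer updated = counters.executeOnKey("page-views",
        (EntryProcessor<String, Integer, Integer>) entry -> {
            Integer current = entry.getValue() == null ? 0 : entry.getValue();
            entry.setValue(current + 1);
            return entry.getValue();
        });
----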
-When using the `executeOnEntries` method, if the number of entries is high and you do not need the results, then returning null with the `process()` method is a good practice. This method is offered by the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/EntryProcessor.html[EntryProcessor interface^]. By returning null, results of the processing are not collected and thus out of memory errors are eliminated. +When using the `executeOnEntries` method, if the number of entries is high and you do not need the results, then returning null with the `process()` method is a good practice. This method is offered by the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/EntryProcessor.html[EntryProcessor interface^]. By returning null, results of the processing are not collected and thus out of memory errors are eliminated. -If you do not need to read or modify the entry in any way but would like to execute a task on the member owning the entry with that key (i.e. the member is the partition owner for that key), you can also use `executeOnKeyOwner` provided by https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/IExecutorService.html#executeOnKeyOwner-java.lang.Runnable-java.lang.Object-[IExecutorService^]. You need to make sure that the runnable can be serialized (using any of the available serialization techniques in Hazelcast). The runnable will not receive the map entry key or value and is not running on the same thread as operations reading the map data so operations such as `map.get()` or `map.put()` will not be blocked. +If you do not need to read or modify the entry in any way but would like to execute a task on the member owning the entry with that key (i.e. the member is the partition owner for that key), you can also use `executeOnKeyOwner` provided by https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/IExecutorService.html#executeOnKeyOwner-java.lang.Runnable-java.lang.Object-[IExecutorService^]. You need to make sure that the runnable can be serialized (using any of the available serialization techniques in Hazelcast). The runnable will not receive the map entry key or value and is not running on the same thread as operations reading the map data so operations such as `map.get()` or `map.put()` will not be blocked. You can also use entry processors to remove entries from your map simply by setting the value(s) of a single entry or multiple entries to `null`. See the following @@ -110,7 +110,7 @@ entry.setValue(newValue, 10, TimeUnit.SECONDS); ---- The interface also provides the ability to update the entry value without changing the entry's TTL. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/ExtendedMapEntry.html[interface documentation] for method descriptions. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/ExtendedMapEntry.html[interface documentation] for method descriptions. == Creating an Entry Processor @@ -179,10 +179,10 @@ In this case the threading looks as follows: . execution thread (process(entry) method) . 
partition thread (set new value & unlock key, or just unlock key if the entry has not been modified) -The method `getExecutorName()` method may also return two constants defined in the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/Offloadable.html[Offloadable interface^]: +The method `getExecutorName()` method may also return two constants defined in the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/Offloadable.html[Offloadable interface^]: -* https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/Offloadable.html#NO_OFFLOADING[`NO_OFFLOADING`]: Processing is not offloaded if the method `getExecutorName()` returns this constant; it is executed as if it does not implement the `Offloadable` interface. -* https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/Offloadable.html#OFFLOADABLE_EXECUTOR[`OFFLOADABLE_EXECUTOR`]: Processing is offloaded to the default `ExecutionService.OFFLOADABLE_EXECUTOR`. +* https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/Offloadable.html#NO_OFFLOADING[`NO_OFFLOADING`]: Processing is not offloaded if the method `getExecutorName()` returns this constant; it is executed as if it does not implement the `Offloadable` interface. +* https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/Offloadable.html#OFFLOADABLE_EXECUTOR[`OFFLOADABLE_EXECUTOR`]: Processing is offloaded to the default `ExecutionService.OFFLOADABLE_EXECUTOR`. Note that if the method `getExecutorName()` cannot find an executor whose name matches the one called by this method, then the default executor service is used. Here is the configuration for the "default" executor: diff --git a/docs/modules/data-structures/pages/listening-for-map-entries.adoc b/docs/modules/data-structures/pages/listening-for-map-entries.adoc index 5c38d3f3e..c536a6195 100644 --- a/docs/modules/data-structures/pages/listening-for-map-entries.adoc +++ b/docs/modules/data-structures/pages/listening-for-map-entries.adoc @@ -249,7 +249,7 @@ It is not strictly necessary, but it is a good idea to also implement `equals()` The map API has two methods for adding and removing an interceptor to the map: `addInterceptor` and `removeInterceptor`. See also the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/MapInterceptor.html[`MapInterceptor` interface^] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/MapInterceptor.html[`MapInterceptor` interface^] to learn about the methods used to intercept the changes in a map. Methods available within the `MapInterceptor` interface: diff --git a/docs/modules/data-structures/pages/replicated-map.adoc b/docs/modules/data-structures/pages/replicated-map.adoc index 11ff60f2f..af163fa32 100644 --- a/docs/modules/data-structures/pages/replicated-map.adoc +++ b/docs/modules/data-structures/pages/replicated-map.adoc @@ -284,7 +284,7 @@ protection checks: **Configuring Split-Brain Protection** Split-brain protection for Replicated Map can be configured programmatically -using the method https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/ReplicatedMapConfig.html[setSplitBrainProtectionName()^], +using the method https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/ReplicatedMapConfig.html[setSplitBrainProtectionName()^], or declaratively using the element `split-brain-protection-ref`. 
Following is an example declarative configuration: [tabs] diff --git a/docs/modules/data-structures/pages/vector-collections.adoc b/docs/modules/data-structures/pages/vector-collections.adoc index 5e477eb57..f1384710f 100644 --- a/docs/modules/data-structures/pages/vector-collections.adoc +++ b/docs/modules/data-structures/pages/vector-collections.adoc @@ -272,7 +272,7 @@ While recovering from a split-brain scenario, Vector Collection in the small cluster merges into the bigger cluster based on a configured merge policy. The merge policy resolves conflicts with different out-of-the-box strategies. It can be configured programmatically using the method -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/vector/VectorCollectionConfig.html#setMergePolicyConfig(com.hazelcast.config.MergePolicyConfig)[setMergePolicyConfig()^], +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/vector/VectorCollectionConfig.html#setMergePolicyConfig(com.hazelcast.config.MergePolicyConfig)[setMergePolicyConfig()^], or declaratively using the element `merge-policy`. The following example shows declarative configuration: diff --git a/docs/modules/deploy/pages/enterprise-licenses.adoc b/docs/modules/deploy/pages/enterprise-licenses.adoc index 975345c91..52c91701f 100644 --- a/docs/modules/deploy/pages/enterprise-licenses.adoc +++ b/docs/modules/deploy/pages/enterprise-licenses.adoc @@ -72,7 +72,7 @@ hazelcast: Java:: + -- -Add your license to the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/Config.html#setLicenseKey-java.lang.String-[`setLicenseKey()`] method. +Add your license to the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/Config.html#setLicenseKey-java.lang.String-[`setLicenseKey()`] method. [source,java] ---- diff --git a/docs/modules/events/pages/object-events.adoc b/docs/modules/events/pages/object-events.adoc index dcd22c7c9..6cd749cb8 100644 --- a/docs/modules/events/pages/object-events.adoc +++ b/docs/modules/events/pages/object-events.adoc @@ -510,7 +510,7 @@ The `ReliableMessageListener` also copes with exceptions using the `isTerminal(T **Global Order** -The `ReliableMessageListener` always gets all events in order (global order). It does not get duplicates, and gaps (loss of messages) only occur when it is too slow. For more information on dealing with message loss, refer to the `isLossTolerant()` method in the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/topic/ReliableMessageListener.html#isLossTolerant()[Java API documentation^]. +The `ReliableMessageListener` always gets all events in order (global order). It does not get duplicates, and gaps (loss of messages) only occur when it is too slow. For more information on dealing with message loss, refer to the `isLossTolerant()` method in the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/topic/ReliableMessageListener.html#isLossTolerant()[Java API documentation^]. **Delivery Guarantees** diff --git a/docs/modules/extending-hazelcast/pages/discovery-spi.adoc b/docs/modules/extending-hazelcast/pages/discovery-spi.adoc index c75c19963..4900d324d 100644 --- a/docs/modules/extending-hazelcast/pages/discovery-spi.adoc +++ b/docs/modules/extending-hazelcast/pages/discovery-spi.adoc @@ -330,7 +330,7 @@ hazelcast: ==== To find out further details, please have a look at the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/spi/discovery/package-summary.html[Discovery SPI Javadoc]. 
+https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/spi/discovery/package-summary.html[Discovery SPI Javadoc]. == DiscoveryService (Framework integration) diff --git a/docs/modules/integrate/pages/custom-connectors.adoc b/docs/modules/integrate/pages/custom-connectors.adoc index 65a09357e..eebd4adec 100644 --- a/docs/modules/integrate/pages/custom-connectors.adoc +++ b/docs/modules/integrate/pages/custom-connectors.adoc @@ -2,9 +2,9 @@ If Hazelcast doesn't natively support the data source/sink you need, you can build a connector for it yourself by using the -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/pipeline/SourceBuilder.html[SourceBuilder] +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/pipeline/SourceBuilder.html[SourceBuilder] and -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/pipeline/SinkBuilder.html[SinkBuilder]. +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/pipeline/SinkBuilder.html[SinkBuilder]. == SourceBuilder diff --git a/docs/modules/integrate/pages/elasticsearch-connector.adoc b/docs/modules/integrate/pages/elasticsearch-connector.adoc index c82a5bf88..0d46e7b5d 100644 --- a/docs/modules/integrate/pages/elasticsearch-connector.adoc +++ b/docs/modules/integrate/pages/elasticsearch-connector.adoc @@ -9,7 +9,7 @@ This connector is included in the full distribution of Hazelcast. To use this connector in the slim distribution, you must have the `hazelcast-jet-elasticsearch-7` module on your members' classpaths. -Each module includes an Elasticsearch client that's compatible with the given major version of Elasticsearch. The connector API is the same between different versions, apart from a few minor differences where we surface the API of Elasticsearch client. See the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/elastic/ElasticSources.html[Javadoc] for any such differences. +Each module includes an Elasticsearch client that's compatible with the given major version of Elasticsearch. The connector API is the same between different versions, apart from a few minor differences where we surface the API of Elasticsearch client. See the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/elastic/ElasticSources.html[Javadoc] for any such differences. == Permissions [.enterprise]*{enterprise-product-name}* diff --git a/docs/modules/integrate/pages/kafka-connector.adoc b/docs/modules/integrate/pages/kafka-connector.adoc index e4e102256..f2385eca8 100644 --- a/docs/modules/integrate/pages/kafka-connector.adoc +++ b/docs/modules/integrate/pages/kafka-connector.adoc @@ -75,7 +75,7 @@ committing using `enable.auto.commit` and configuring link:https://kafka.apache.org/22/documentation.html[Kafka documentation] for the descriptions of these properties. -You can also explicitly specify exact initial offsets for the Kafka source using https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/kafka/KafkaSources.html#kafka(java.util.Properties,com.hazelcast.function.FunctionEx,com.hazelcast.jet.kafka.TopicsConfig)[`TopicsConfig` parameter^]. +You can also explicitly specify exact initial offsets for the Kafka source using https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/kafka/KafkaSources.html#kafka(java.util.Properties,com.hazelcast.function.FunctionEx,com.hazelcast.jet.kafka.TopicsConfig)[`TopicsConfig` parameter^]. 
Note that initial offsets provided in `topicConfig` will always have priority over offsets stored in Kafka or associated with a given consumer group. Those offsets are used only when the job is started for the first time after submission. Afterwards, the regular fault tolerance mechanism described above is used. diff --git a/docs/modules/integrate/pages/legacy-cdc-connectors.adoc b/docs/modules/integrate/pages/legacy-cdc-connectors.adoc index 662789fcf..df5ba7295 100644 --- a/docs/modules/integrate/pages/legacy-cdc-connectors.adoc +++ b/docs/modules/integrate/pages/legacy-cdc-connectors.adoc @@ -25,12 +25,12 @@ This connector is included in the full distribution of Hazelcast {open-source-pr We have the following types of CDC sources: -* link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/cdc/DebeziumCdcSources.html[DebeziumCdcSources, window=_blank]: +* link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/cdc/DebeziumCdcSources.html[DebeziumCdcSources, window=_blank]: a generic source for all databases supported by Debezium -* link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/cdc/mysql/MySqlCdcSources.html[MySqlCdcSources, window=_blank]: +* link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/cdc/mysql/MySqlCdcSources.html[MySqlCdcSources, window=_blank]: a specific, first class Jet CDC source for MySQL databases (also based on Debezium, but with the additional benefits provided by Hazelcast) -* link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/cdc/postgres/PostgresCdcSources.html[PostgresCdcSources, window=_blank]: +* link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/cdc/postgres/PostgresCdcSources.html[PostgresCdcSources, window=_blank]: a specific, first class CDC source for PostgreSQL databases (also based on Debezium, but with the additional benefits provided by Hazelcast) diff --git a/docs/modules/jcache/pages/icache.adoc b/docs/modules/jcache/pages/icache.adoc index edd64ab7f..6e50b077c 100644 --- a/docs/modules/jcache/pages/icache.adoc +++ b/docs/modules/jcache/pages/icache.adoc @@ -1044,7 +1044,7 @@ destroyed and cache's data is removed. * `getLocalCacheStatistics()`: Returns a `com.hazelcast.cache.CacheStatistics` instance, both on Hazelcast members and clients, providing the same statistics data as the JMX beans. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/cache/ICache.html[ICache Javadoc^] +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/cache/ICache.html[ICache Javadoc^] to see all the methods provided by ICache. 
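A short sketch of reading those local statistics follows; the cache name is illustrative, and the cache is assumed to have been configured or created beforehand through the usual JCache/ICache setup:

[source,java]
----
import com.hazelcast.cache.CacheStatistics;
import com.hazelcast.cache.ICache;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

HazelcastInstance hz = Hazelcast.newHazelcastInstance();

// The cache must already be configured or created; the name here is illustrative.
ICache<String, String> cache = hz.getCacheManager().getCache("books");

CacheStatistics stats = cache.getLocalCacheStatistics();
System.out.println("puts=" + stats.getCachePuts()
        + " hits=" + stats.getCacheHits()
        + " misses=" + stats.getCacheMisses());
----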
=== Implementing BackupAwareEntryProcessor diff --git a/docs/modules/maintain-cluster/pages/cluster-member-states.adoc b/docs/modules/maintain-cluster/pages/cluster-member-states.adoc index 422c72c3d..bea7049a7 100644 --- a/docs/modules/maintain-cluster/pages/cluster-member-states.adoc +++ b/docs/modules/maintain-cluster/pages/cluster-member-states.adoc @@ -108,7 +108,7 @@ When the graceful shutdown process is completed, the member's state changes to ` To change a cluster's state, you can use one of the following: -- The https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/cluster/Cluster.html[`changeClusterState()` method] +- The https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/cluster/Cluster.html[`changeClusterState()` method] - xref:{page-latest-supported-mc}@management-center:monitor-imdg:cluster-administration.adoc#cluster-state[Management Center] - The xref:management:cluster-utilities.adoc#example-usages-for-hz-cluster-admin[`hz-cluster-admin` script] diff --git a/docs/modules/maintain-cluster/pages/monitoring.adoc b/docs/modules/maintain-cluster/pages/monitoring.adoc index 13a1096ae..608907adf 100644 --- a/docs/modules/maintain-cluster/pages/monitoring.adoc +++ b/docs/modules/maintain-cluster/pages/monitoring.adoc @@ -559,16 +559,16 @@ https://prometheus.io/docs/prometheus/latest/getting_started[Prometheus website] ==== Via Job API The `Job` class has a -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/Job.html#getMetrics--[`getMetrics()`] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/Job.html#getMetrics--[`getMetrics()`] method which returns a -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/core/metrics/JobMetrics.html[JobMetrics] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/core/metrics/JobMetrics.html[JobMetrics] instance. It contains the latest known metric values for the job. This functionality has been developed primarily to give access to metrics of finished jobs, but can in fact be used for jobs in any state. For details on how to use and filter the metric values consult the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/core/metrics/JobMetrics.html[JobMetrics API docs]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/core/metrics/JobMetrics.html[JobMetrics API docs]. A simple example for computing the number of data items emitted by a certain vertex (let’s call it vertexA), excluding items emitted to the snapshot, would look like this: @@ -744,7 +744,7 @@ The following are some of the metrics that you can access via the `LocalMapStats * Number of queries executed on the map (`getQueryCount()` and `getIndexedQueryCount()`) (it may be imprecise for queries involving partition predicates (`PartitionPredicate`) on the off-heap storage). -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/LocalMapStats.html[`LocalMapStats` Javadoc^] to see all the metrics. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/LocalMapStats.html[`LocalMapStats` Javadoc^] to see all the metrics. === Map Index Statistics @@ -776,7 +776,7 @@ To compute an average latency divide the returned value by the number of operati this memory cost metric value is a best-effort approximation and doesn't indicate a precise on-heap memory usage of an index. 
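A brief sketch of reading these per-index values on a member; the map name is illustrative:

[source,java]
----
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.query.LocalIndexStats;
import java.util.Map;

HazelcastInstance hz = Hazelcast.newHazelcastInstance();

// Per-index statistics for this member only, keyed by index name.
Map<String, LocalIndexStats> indexStats =
        hz.getMap("orders").getLocalMapStats().getIndexStats();
indexStats.forEach((name, stats) ->
        System.out.println(name + " selectivity=" + stats.getAverageHitSelectivity()
                + " hits=" + stats.getHitCount()));
----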
-See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/query/LocalIndexStats.html[`LocalIndexStats` Javadoc^] to see all the metrics. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/query/LocalIndexStats.html[`LocalIndexStats` Javadoc^] to see all the metrics. To compute an aggregated value of `getAverageHitSelectivity()` for all cluster members, you can use a simple averaging computation as shown below: @@ -830,7 +830,7 @@ the `NearCacheStats` object (applies to both client and member Near Caches): * memory cost (number of bytes) of owned entries in the Near Cache (`getOwnedEntryMemoryCost()`) * number of hits (reads) of the locally owned entries (`getHits()`) -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/nearcache/NearCacheStats.html[`NearCacheStats` Javadoc^] to see all the metrics. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/nearcache/NearCacheStats.html[`NearCacheStats` Javadoc^] to see all the metrics. === MultiMap Statistics @@ -857,7 +857,7 @@ the `LocalMultiMapStats` object: * number of get and put operations on the map (`getPutOperationCount()` and `getGetOperationCount()`) -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/multimap/LocalMultiMapStats.html[`LocalMultiMapStats` Javadoc^] to see all the metrics. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/multimap/LocalMultiMapStats.html[`LocalMultiMapStats` Javadoc^] to see all the metrics. === Queue Statistics @@ -880,7 +880,7 @@ the `LocalQueueStats ` object: * minimum and maximum ages of the items in the member (`getMinAge()` and `getMaxAge()`) * number of offer, put and add operations (`getOfferOperationCount()`) -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/collection/LocalQueueStats.html[`LocalQueueStats` Javadoc^] to see all the metrics. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/collection/LocalQueueStats.html[`LocalQueueStats` Javadoc^] to see all the metrics. === Topic Statistics @@ -901,7 +901,7 @@ The following are the metrics that you can access via the `LocalTopicStats ` obj * total number of published messages of the topic on the member (`getPublishOperationCount()`) * total number of received messages of the topic on the member (`getReceiveOperationCount()`) -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/topic/LocalTopicStats.html[`LocalTopicStats` Javadoc^] to see all the metrics. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/topic/LocalTopicStats.html[`LocalTopicStats` Javadoc^] to see all the metrics. === Executor Statistics @@ -924,7 +924,7 @@ the `LocalExecutorStats ` object: * number of started operations of the executor service (`getStartedTaskCount()`) * number of completed operations of the executor service (`getCompletedTaskCount()`) -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/executor/LocalExecutorStats.html[`LocalExecutorStats` Javadoc^] to see all the metrics. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/executor/LocalExecutorStats.html[`LocalExecutorStats` Javadoc^] to see all the metrics. 
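A combined sketch of reading a few of the statistics objects described in this section on a member; the data structure names are illustrative:

[source,java]
----
import com.hazelcast.collection.LocalQueueStats;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.executor.LocalExecutorStats;
import com.hazelcast.topic.LocalTopicStats;

HazelcastInstance hz = Hazelcast.newHazelcastInstance();

// Each getter returns statistics gathered on this member only.
LocalQueueStats queueStats = hz.getQueue("tasks").getLocalQueueStats();
LocalTopicStats topicStats = hz.getTopic("news").getLocalTopicStats();
LocalExecutorStats executorStats = hz.getExecutorService("default").getLocalExecutorStats();

System.out.println("queue offers:     " + queueStats.getOfferOperationCount());
System.out.println("topic publishes:  " + topicStats.getPublishOperationCount());
System.out.println("executor started: " + executorStats.getStartedTaskCount());
----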
== Health Check and Monitoring diff --git a/docs/modules/mapstore/pages/implement-a-mapstore.adoc b/docs/modules/mapstore/pages/implement-a-mapstore.adoc index a72071508..543250817 100644 --- a/docs/modules/mapstore/pages/implement-a-mapstore.adoc +++ b/docs/modules/mapstore/pages/implement-a-mapstore.adoc @@ -5,7 +5,7 @@ == Differences Between MapLoader and MapStore -The link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/MapStore.html[`MapStore`] interface extends the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/MapLoader.html[`MapLoader`] interface. Therefore, all methods and configuration parameters of the `MapLoader` are also available on the `MapStore` interface. +The link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/MapStore.html[`MapStore`] interface extends the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/MapLoader.html[`MapLoader`] interface. Therefore, all methods and configuration parameters of the `MapLoader` are also available on the `MapStore` interface. If you only want to load data from external systems into a map, use the `MapLoader` interface. If you also want to save map entries to an external system, use the `MapStore` interface. @@ -23,7 +23,7 @@ If you only want to load data from external systems into a map, use the `MapLoad [[managing-the-lifecycle-of-a-mapLoader]] == Connecting to an External System -To connect to an external system, you must configure a connection to it, using either a third-party library or a JDBC driver in the `init()` method of the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/MapLoaderLifecycleSupport.html[`MapLoaderlifeCycleSupport`] implementation. +To connect to an external system, you must configure a connection to it, using either a third-party library or a JDBC driver in the `init()` method of the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/MapLoaderLifecycleSupport.html[`MapLoaderlifeCycleSupport`] implementation. The external system that you choose must be a centralized system that is accessible to all Hazelcast members. Persistence to a local file system is not supported. diff --git a/docs/modules/mapstore/pages/mapstore-triggers.adoc b/docs/modules/mapstore/pages/mapstore-triggers.adoc index d4a00ccd8..50a79e0a8 100644 --- a/docs/modules/mapstore/pages/mapstore-triggers.adoc +++ b/docs/modules/mapstore/pages/mapstore-triggers.adoc @@ -3,7 +3,7 @@ {description} -NOTE: If the `initial-mode` configuration is set to `LAZY`, the first time any link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/IMap.html[map method] +NOTE: If the `initial-mode` configuration is set to `LAZY`, the first time any link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/IMap.html[map method] is called, it triggers the `MapLoader.loadAllKeys()` method. [cols="1m,5a"] @@ -78,4 +78,4 @@ executeOnAllEntries() |=== More information about the behavior of IMap method calls and their relationship to `MapStore` methods can be found in the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/IMap.html[IMap Javadocs]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/IMap.html[IMap Javadocs]. 
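To make the distinction above concrete, here is a minimal `MapStore` skeleton. It keeps data in a local map purely for brevity; a real implementation would talk to a centralized external system (for example over JDBC) that every member can reach:

[source,java]
----
import com.hazelcast.map.MapStore;
import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;

public class PersonMapStore implements MapStore<Long, String> {

    // Placeholder for the external system; real code would hold a connection or client here.
    private final Map<Long, String> backingStore = new HashMap<>();

    // MapStore methods: called when the map writes through to the external system.
    @Override public synchronized void store(Long key, String value) { backingStore.put(key, value); }
    @Override public synchronized void storeAll(Map<Long, String> entries) { backingStore.putAll(entries); }
    @Override public synchronized void delete(Long key) { backingStore.remove(key); }
    @Override public synchronized void deleteAll(Collection<Long> keys) { keys.forEach(backingStore::remove); }

    // MapLoader methods: called when the map reads from the external system.
    @Override public synchronized String load(Long key) { return backingStore.get(key); }
    @Override public synchronized Map<Long, String> loadAll(Collection<Long> keys) {
        Map<Long, String> result = new HashMap<>();
        keys.forEach(k -> {
            String value = backingStore.get(k);
            if (value != null) {
                result.put(k, value);
            }
        });
        return result;
    }
    @Override public synchronized Iterable<Long> loadAllKeys() { return new HashSet<>(backingStore.keySet()); }
}
----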
diff --git a/docs/modules/migrate/pages/upgrading-from-imdg-3.adoc b/docs/modules/migrate/pages/upgrading-from-imdg-3.adoc index f383435ec..a514f2d8c 100644 --- a/docs/modules/migrate/pages/upgrading-from-imdg-3.adoc +++ b/docs/modules/migrate/pages/upgrading-from-imdg-3.adoc @@ -115,7 +115,7 @@ degradation. == Removing Deprecated Client Configurations -The following methods of https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/client/config/ClientConfig.html[ClientConfig^] have been refactored: +The following methods of https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/client/config/ClientConfig.html[ClientConfig^] have been refactored: * `addNearCacheConfig(String, NearCacheConfig)` -> `addNearCacheConfig(NearCacheConfig)` * `setSmartRouting(boolean)` -> `getNetworkConfig().setSmartRouting(boolean);` @@ -133,7 +133,7 @@ The following methods of https://docs.hazelcast.org/docs/{full-version}/javadoc/ * `setSocketOptions()` -> `getNetworkConfig().setSocketOptions(SocketOptions);` * `getNetworkConfig().setAwsConfig(new ClientAwsConfig());` -> `getNetworkConfig().setAwsConfig(new AwsConfig());` -Also, the `ClientAwsConfig` class has been renamed as https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/AwsConfig.html[`AwsConfig`] +Also, the `ClientAwsConfig` class has been renamed as https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/AwsConfig.html[`AwsConfig`] The naming for the declarative configuration elements have not been changed. @@ -423,8 +423,8 @@ mapConfig.addAttributeConfig(attributeConfig); Also, some custom query attribute classes were previously abstract classes with one abstract method. They have been converted into functional interfaces: -* https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/query/extractor/ValueCallback.html[ValueCallback^] -* https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/query/extractor/ValueExtractor.html[ValueExtractor^] +* https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/query/extractor/ValueCallback.html[ValueCallback^] +* https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/query/extractor/ValueExtractor.html[ValueExtractor^] [cols="1a,1a"] |=== @@ -615,7 +615,7 @@ public class ClusterMigrationListener implements MigrationListener { |=== -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/partition/MigrationListener.html[MigrationListener^] Javadoc +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/partition/MigrationListener.html[MigrationListener^] Javadoc for a full insight. == Defaulting to OpenSSL @@ -822,7 +822,7 @@ Also, `DefaultPermissionPolicy` which was consuming `ClusterPrincipal` and also reading the endpoint address from it works with the new `ClusterRolePrincipals` and `ClusterEndpointPrincipals` principal types. -See the following table for the before/after sample https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/security/IPermissionPolicy.html[IPermissionPolicy^] implementations. +See the following table for the before/after sample https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/security/IPermissionPolicy.html[IPermissionPolicy^] implementations. 
[cols="1a,1a"] |=== @@ -1157,7 +1157,7 @@ See the xref:network-partitioning:split-brain-protection.adoc[Split-Brain Protec == Renaming getID to getClassId in IdentifiedDataSerializable -The `getId()` method of the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/nio/serialization/IdentifiedDataSerializable.html[IdentifiedDataSerializable^] interface +The `getId()` method of the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/nio/serialization/IdentifiedDataSerializable.html[IdentifiedDataSerializable^] interface is a method with a common name, meaning a naming conflict would happen frequently. For example, database entities also have a `getId()` method. Therefore, it has been renamed as `getClassId()`. @@ -1208,7 +1208,7 @@ on `IdentifiedDataSerializable`. === Entry Processor The `EntryBackupProcessor` interface has been removed in favor -of https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/EntryProcessor.html[EntryProcessor^] which now defines how the entries will be processed +of https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/EntryProcessor.html[EntryProcessor^] which now defines how the entries will be processed both on the primary and the backup replicas. Because of this, the `AbstractEntryProcessor` interface has been removed. @@ -1285,7 +1285,7 @@ Introduces interfaces with single abstract method which declares a checked exception. The interfaces are also `Serializable` and can be readily used when providing a lambda which is then serialized. -The https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/projection/Projection.html[Projection^] class was an abstract interface for historical reasons. +The https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/projection/Projection.html[Projection^] class was an abstract interface for historical reasons. It has been turned into a functional interface so it's more lambda-friendly. See the following table for the before/after sample implementations. @@ -1961,7 +1961,7 @@ The following deprecated methods have been removed: * `getServiceImpl()`, replaced by `getImplementation()`. * `setServiceImpl(Object)`, replaced by `setImplementation(Object)`. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/ServiceConfig.html[here^] +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/ServiceConfig.html[here^] for ``ServiceConfig``s Javadoc. == Removal of Deprecations in `TransactionContext` @@ -1969,7 +1969,7 @@ for ``ServiceConfig``s Javadoc. Deprecated `getXaResource()` method has been removed. `HazelcastInstance.getXAResource()` should be used instead. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/HazelcastInstance.html[here^] +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/HazelcastInstance.html[here^] for ``HazelcastInstance``s Javadoc. == Removal of Deprecations in `DistributedObjectEvent` @@ -1977,7 +1977,7 @@ for ``HazelcastInstance``s Javadoc. Deprecated `getObjectId()` method has been removed, `getObjectName()` should be used instead. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/DistributedObjectEvent.html[here^] +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/DistributedObjectEvent.html[here^] for ``DistributedObjectEvents``s Javadoc. 
== Removal of Deprecated `EntryListener`-based Listener API in `IMap` diff --git a/docs/modules/migrate/pages/upgrading-from-jet.adoc b/docs/modules/migrate/pages/upgrading-from-jet.adoc index 135634025..d40a149ad 100644 --- a/docs/modules/migrate/pages/upgrading-from-jet.adoc +++ b/docs/modules/migrate/pages/upgrading-from-jet.adoc @@ -68,7 +68,7 @@ The following methods for configuring the underlying IMDG instance of Jet have b - `configureHazelcast()` - `setHazelcastConfig()` -If you used the `JetConfig` object to configure IMDG, you should replace instances of `JetConfig` with link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/Config.html[`Config`]. +If you used the `JetConfig` object to configure IMDG, you should replace instances of `JetConfig` with link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/Config.html[`Config`]. [tabs] ==== @@ -154,7 +154,7 @@ All configuration loader methods have been moved from the `JetConfig` object to === Jet Properties -In the Java API, properties in the `JetProperties` object have been merged into the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/spi/properties/ClusterProperty.html[`ClusterProperty` object]. +In the Java API, properties in the `JetProperties` object have been merged into the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/spi/properties/ClusterProperty.html[`ClusterProperty` object]. The following Jet properties have been removed: @@ -202,10 +202,10 @@ the cluster will not start. You need to either edit and update your YAML configu == API Entry Points The `Jet` class, which was the main entry point of Jet 4.x, -has been deprecated and replaced by the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/HazelcastInstance.html[`HazelcastInstance` class]. +has been deprecated and replaced by the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/HazelcastInstance.html[`HazelcastInstance` class]. The `JetInstance` class, which -represented an instance of a Jet member or client has been deprecated and replaced by the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/JetService.html[`JetService` class]. To access Jet related services, you should now use the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/HazelcastInstance.html#getJet--[`HazelcastInstance.getJet()` method] to get an instance of the `JetService` object. +represented an instance of a Jet member or client has been deprecated and replaced by the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/JetService.html[`JetService` class]. To access Jet related services, you should now use the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/HazelcastInstance.html#getJet--[`HazelcastInstance.getJet()` method] to get an instance of the `JetService` object. [tabs] ==== diff --git a/docs/modules/pipelines/pages/cdc-database-setup.adoc b/docs/modules/pipelines/pages/cdc-database-setup.adoc index 6ae191afc..62c37b860 100644 --- a/docs/modules/pipelines/pages/cdc-database-setup.adoc +++ b/docs/modules/pipelines/pages/cdc-database-setup.adoc @@ -183,7 +183,7 @@ user locally or on `localhost`, using IPv4 or IPv6. 
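Relating to the Jet API entry-point changes described above (the deprecation of `Jet` and `JetInstance` in favour of `HazelcastInstance` and `JetService`), a minimal sketch of submitting a job through the new entry point might look like this; the pipeline itself is a trivial placeholder:

[source,java]
----
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.JetService;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class JetEntryPointExample {
    public static void main(String[] args) {
        // HazelcastInstance is the single entry point; Jet is reached via getJet()
        HazelcastInstance hz = Hazelcast.bootstrappedInstance();
        JetService jet = hz.getJet();

        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(TestSources.items(1, 2, 3))
                .writeTo(Sinks.logger());

        jet.newJob(pipeline).join();
        hz.shutdown();
    }
}
----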
== Other Databases Streaming CDC data from other databases supported by Debezium is -possible in Hazelcast by using the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/cdc/DebeziumCdcSources.html[generic Debezium source]. +possible in Hazelcast by using the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/cdc/DebeziumCdcSources.html[generic Debezium source]. This deployment guide however only covers the databases we have first class support for. See the following documentation for the other databases: @@ -277,7 +277,7 @@ The general behaviour of the MySQL connector when loosing connection to the database is governed by a configurable reconnect strategy and a boolean flag specifying if state should be reset on reconnects or not. For details see the -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/cdc/mysql/MySqlCdcSources.html[javadoc]. +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/cdc/mysql/MySqlCdcSources.html[javadoc]. There are however some discrepancies and peculiarities in the behavior. @@ -330,7 +330,7 @@ The general behaviour of the PostgreSQL connector when losing connection to the database is governed by a configurable reconnect strategy and a boolean flag specifying if state should be reset on reconnects or not. For details see the -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/cdc/postgres/PostgresCdcSources.html[javadoc]. +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/cdc/postgres/PostgresCdcSources.html[javadoc]. There are however some peculiarities to the behaviour. diff --git a/docs/modules/pipelines/pages/configuring-jobs.adoc b/docs/modules/pipelines/pages/configuring-jobs.adoc index f68eec39f..b11310aba 100644 --- a/docs/modules/pipelines/pages/configuring-jobs.adoc +++ b/docs/modules/pipelines/pages/configuring-jobs.adoc @@ -41,7 +41,7 @@ a|Depends on the source or sink. |suspendOnFailure |Sets what happens when a job execution fails. -- If enabled, the job will be suspended on failure. A snapshot of the job's computational state will be preserved. You can update the configuration of a suspended job and resume it link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/config/DeltaJobConfig.html[programmatically] or xref:sql:alter-job.adoc[using SQL]. +- If enabled, the job will be suspended on failure. A snapshot of the job's computational state will be preserved. You can update the configuration of a suspended job and resume it link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/config/DeltaJobConfig.html[programmatically] or xref:sql:alter-job.adoc[using SQL]. - If disabled, the job will be terminated and the job snapshots will be deleted. |`boolean` |true diff --git a/docs/modules/pipelines/pages/custom-aggregate-operation.adoc b/docs/modules/pipelines/pages/custom-aggregate-operation.adoc index f47e4f1e0..888b66396 100644 --- a/docs/modules/pipelines/pages/custom-aggregate-operation.adoc +++ b/docs/modules/pipelines/pages/custom-aggregate-operation.adoc @@ -7,9 +7,9 @@ _aggregate function_. A basic example is `sum` applied to a set of integer numbers, but the result can also be a complex value, for example a list of all the input items. 
-The Jet API contains a range of link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperations.html[predefined aggregate functions], +The Jet API contains a range of link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperations.html[predefined aggregate functions], but it also exposes an abstraction, called -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperation.html[`AggregateOperation`], +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperation.html[`AggregateOperation`], that allows you to plug in your own. Since Hazelcast does the aggregation in a parallelized and distributed way, you can't simply supply a piece of Java code that implements the aggregate function; we need you to break @@ -79,9 +79,9 @@ computation performed on it. This means that you just need one accumulator class for each kind of structure that holds the accumulated data, as opposed to one for each aggregate operation. The Jet API offers in the -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/accumulator/package-summary.html[`com.hazelcast.jet.accumulator`] +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/accumulator/package-summary.html[`com.hazelcast.jet.accumulator`] package several such classes, one of them being -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/accumulator/LongLongAccumulator.html[`LongLongAccumulator`], +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/accumulator/LongLongAccumulator.html[`LongLongAccumulator`], which is a match for our `average` function. You'll just have to supply the logic on top of it. @@ -199,7 +199,7 @@ have to resort to the less type-safe, general `AggregateOperation`. Hazelcast can join several streams and simultaneously perform aggregation on all them. You specify a separate aggregate operation for each input stream and have the opportunity to combine their results -when done. You can use aggregate operations link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperations.html[provided in the library] +when done. You can use aggregate operations link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperations.html[provided in the library] (see the section on xref:transforms.adoc#stateful-transforms#co-group--join[co-aggregating] for an example). diff --git a/docs/modules/pipelines/pages/grpc.adoc b/docs/modules/pipelines/pages/grpc.adoc index df4172168..a50dd7366 100644 --- a/docs/modules/pipelines/pages/grpc.adoc +++ b/docs/modules/pipelines/pages/grpc.adoc @@ -250,7 +250,7 @@ jet.grpc.destroy.timeout.seconds jet.grpc.shutdown.timeout.seconds ``` -The link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/grpc/GrpcProperties.html[GrpcProperties] +The link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/grpc/GrpcProperties.html[GrpcProperties] JavaDoc provides more details about these properties. 
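To make the custom `AggregateOperation` discussion above concrete, here is a minimal sketch of an averaging operation. To stay self-contained it defines its own small accumulator class rather than reusing `LongLongAccumulator`, and the class and method names are illustrative:

[source,java]
----
import com.hazelcast.jet.aggregate.AggregateOperation;
import com.hazelcast.jet.aggregate.AggregateOperation1;

import java.io.Serializable;

public class AverageAggregation {

    // Simple mutable accumulator holding a running count and sum.
    // Serializable so it can travel between members and into snapshots.
    public static class AvgAccumulator implements Serializable {
        long count;
        long sum;
    }

    // Builds an aggregate operation that computes the average of Long inputs.
    public static AggregateOperation1<Long, AvgAccumulator, Double> averagingLongs() {
        return AggregateOperation
                .withCreate(AvgAccumulator::new)
                .<Long>andAccumulate((acc, item) -> {
                    acc.count++;
                    acc.sum += item;
                })
                .andCombine((left, right) -> {
                    left.count += right.count;
                    left.sum += right.sum;
                })
                // yields NaN for an empty accumulator; adjust if you need a default
                .andExportFinish(acc -> (double) acc.sum / acc.count);
    }
}
----

An operation built this way can then be passed to an aggregating stage, for example `stage.aggregate(averagingLongs())`, in the same places the predefined operations are used.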
See the link:https://github.com/hazelcast/hazelcast-jet/tree/master/examples/grpc[grpc example] diff --git a/docs/modules/pipelines/pages/observables.adoc b/docs/modules/pipelines/pages/observables.adoc index 349e9f0d3..a6aff3ce0 100644 --- a/docs/modules/pipelines/pages/observables.adoc +++ b/docs/modules/pipelines/pages/observables.adoc @@ -61,7 +61,7 @@ observable.destroy(); == Clean-up Observables are backed by -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/ringbuffer/Ringbuffer.html[Ringbuffers] +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/ringbuffer/Ringbuffer.html[Ringbuffers] stored in the cluster which should be cleaned up by the client, once they are no longer necessary. They have a `destroy()` method which does just that. If the Observable isn’t destroyed, its memory will be leaked diff --git a/docs/modules/pipelines/pages/overview.adoc b/docs/modules/pipelines/pages/overview.adoc index d3cd78e10..d564b27af 100644 --- a/docs/modules/pipelines/pages/overview.adoc +++ b/docs/modules/pipelines/pages/overview.adoc @@ -48,7 +48,7 @@ A pipeline without any sinks is not valid. To process or enrich data in between reading it from a source and sending it to a sink, you can use transforms. -In Hazelcast, pipelines can be defined using either xref:learn-sql.adoc[SQL] or the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/pipeline/Pipeline.html[Jet API]. +In Hazelcast, pipelines can be defined using either xref:learn-sql.adoc[SQL] or the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/pipeline/Pipeline.html[Jet API]. == Extracting and Ingesting Data diff --git a/docs/modules/pipelines/pages/serialization.adoc b/docs/modules/pipelines/pages/serialization.adoc index 5aef76e70..86923163f 100644 --- a/docs/modules/pipelines/pages/serialization.adoc +++ b/docs/modules/pipelines/pages/serialization.adoc @@ -148,8 +148,8 @@ types: - link:https://docs.oracle.com/javase/8/docs/api/java/io/Serializable.html[java.io.Serializable] - link:https://docs.oracle.com/javase/8/docs/api/java/io/Externalizable.html[java.io.Externalizable] -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/nio/serialization/Portable.html[com.hazelcast.nio.serialization.Portable] -- link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/nio/serialization/StreamSerializer.html[com.hazelcast.nio.serialization.StreamSerializer] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/nio/serialization/Portable.html[com.hazelcast.nio.serialization.Portable] +- link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/nio/serialization/StreamSerializer.html[com.hazelcast.nio.serialization.StreamSerializer] The following table provides a comparison between them to help you in deciding which interface to use in your applications. @@ -220,7 +220,7 @@ not to mention very wasteful with memory. For the best performance and simplest implementation we recommend using the Hazelcast -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/nio/serialization/StreamSerializer.html[StreamSerializer] +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/nio/serialization/StreamSerializer.html[StreamSerializer] mechanism. 
Here is a sample implementation for a `Person` class: ```java diff --git a/docs/modules/pipelines/pages/transforms.adoc b/docs/modules/pipelines/pages/transforms.adoc index a1dbd967c..7c07f7d4d 100644 --- a/docs/modules/pipelines/pages/transforms.adoc +++ b/docs/modules/pipelines/pages/transforms.adoc @@ -444,10 +444,10 @@ topN() |=== For a complete list, please refer to the -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperations.html[AggregateOperations] +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperations.html[AggregateOperations] class. You can also implement your own aggregate operations using the builder in -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperation.html[AggregateOperation] +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/aggregate/AggregateOperation.html[AggregateOperation] . === groupingKey diff --git a/docs/modules/query/pages/predicate-overview.adoc b/docs/modules/query/pages/predicate-overview.adoc index 70fda3e23..1900f96ef 100644 --- a/docs/modules/query/pages/predicate-overview.adoc +++ b/docs/modules/query/pages/predicate-overview.adoc @@ -128,7 +128,7 @@ expression. NOTE: See the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/query/Predicates.html[Predicates Javadoc^] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/query/Predicates.html[Predicates Javadoc^] for all predicates provided. === Combining Predicates with AND, OR, NOT @@ -394,7 +394,7 @@ easily with the help of the `setPage()` method. This way, if you make a query for the hundredth page, for example, it gets all 100 pages at once instead of reaching the hundredth page one by one using the `nextPage()` method. Note that this feature tires the memory and see the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/query/PagingPredicate.html[PagingPredicate Javadoc^]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/query/PagingPredicate.html[PagingPredicate Javadoc^]. NOTE: Paging Predicate, also known as Order & Limit, is not supported in Transactional Context. @@ -548,8 +548,8 @@ In order to return multiple results from a single extraction, invoke the `ValueCollector.collect()` method multiple times, so that the collector collects all results. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/query/extractor/ValueExtractor.html[ValueExtractor^] and -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/query/extractor/ValueCollector.html[ValueCollector^] Javadocs. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/query/extractor/ValueExtractor.html[ValueExtractor^] and +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/query/extractor/ValueCollector.html[ValueCollector^] Javadocs. NOTE: Custom attributes are compatible with all Hazelcast xref:serialization:comparing-interfaces.adoc[serialization methods]. @@ -571,7 +571,7 @@ results directly to the `ValueCollector`. and grouping the result of the read operation and manually passing it to the `ValueCollector`. -See the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/query/extractor/ValueReader.html[ValueReader^] Javadoc. +See the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/query/extractor/ValueReader.html[ValueReader^] Javadoc. 
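Since several of the hunks above concern the `Predicates` factory and `PagingPredicate`, a short usage sketch may help; the `Employee` class and map contents are illustrative only:

[source,java]
----
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.query.PagingPredicate;
import com.hazelcast.query.Predicate;
import com.hazelcast.query.Predicates;

import java.io.Serializable;
import java.util.Collection;

public class PredicateExample {

    public static class Employee implements Serializable {
        private final String name;
        private final int age;
        private final boolean active;

        public Employee(String name, int age, boolean active) {
            this.name = name;
            this.age = age;
            this.active = active;
        }

        public String getName() { return name; }
        public int getAge() { return age; }
        public boolean isActive() { return active; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<Long, Employee> employees = hz.getMap("employees");
        employees.put(1L, new Employee("Alice", 34, true));
        employees.put(2L, new Employee("Bob", 28, true));
        employees.put(3L, new Employee("Carol", 41, false));

        // Combine built-in predicates with AND
        Predicate<Long, Employee> activeOver30 = Predicates.and(
                Predicates.equal("active", true),
                Predicates.greaterThan("age", 30));
        Collection<Employee> result = employees.values(activeOver30);

        // Page through the matching entries two at a time
        PagingPredicate<Long, Employee> paging = Predicates.pagingPredicate(activeOver30, 2);
        Collection<Employee> firstPage = employees.values(paging);
        paging.nextPage();
        Collection<Employee> secondPage = employees.values(paging);

        hz.shutdown();
    }
}
----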
==== Returning Multiple Values from a Single Extraction @@ -1006,7 +1006,7 @@ making the computation fast. Aggregations are available on `com.hazelcast.map.IMap` only. IMap offers the method `aggregate` to apply the aggregation logic on the map entries. This method can be called with or without a predicate. You can refer -to its https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/IMap.html#aggregate-com.hazelcast.aggregation.Aggregator-[Javadoc^] +to its https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/IMap.html#aggregate-com.hazelcast.aggregation.Aggregator-[Javadoc^] to see the method details. NOTE: If the xref:data-structures:distributed-data-structures.adoc#setting-in-memory-format[in-memory format] of your data is `NATIVE`, @@ -1031,7 +1031,7 @@ These callbacks enable releasing the state that might have been initialized and stored in the Aggregator - to reduce the network traffic. Each phase is described below. See also the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/aggregation/Aggregator.html[Aggregator Javadoc^] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/aggregation/Aggregator.html[Aggregator Javadoc^] for the API's details. **Accumulation:** @@ -1130,14 +1130,14 @@ Projection API. Projections are available on `com.hazelcast.map.IMap` only. IMap offers the method `project` to apply the projection logic on the map entries. This method can be called with or without a predicate. See its -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/map/IMap.html#project-com.hazelcast.projection.Projection-[Javadoc^] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/map/IMap.html#project-com.hazelcast.projection.Projection-[Javadoc^] to see the method details. === Projection API The Projection API provides the method `transform()` which is called on each result object. Its result is then gathered as the final query result entity. You can refer -to the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/projection/Projection.html[Projection Javadoc^] +to the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/projection/Projection.html[Projection Javadoc^] for the API's details. === Example implementation diff --git a/docs/modules/release-notes/pages/5-3-0.adoc b/docs/modules/release-notes/pages/5-3-0.adoc index ad0245be5..6040dde7d 100644 --- a/docs/modules/release-notes/pages/5-3-0.adoc +++ b/docs/modules/release-notes/pages/5-3-0.adoc @@ -83,7 +83,7 @@ https://github.com/hazelcast/hazelcast/pull/23472[#23472], https://github.com/ha https://github.com/hazelcast/hazelcast/pull/23348[#23348] * Introduced `JobStatusListener` as an alternative to retrieve a job status via the `Job.getStatus()` method. https://github.com/hazelcast/hazelcast/pull/23193[#23193] -* Updated the https://docs.hazelcast.org/docs/{page-latest-supported-java-client}/javadoc/com/hazelcast/jet/Job.html#isUserCancelled--[job API] to add the ability +* Updated the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/Job.html#isUserCancelled--[job API] to add the ability to distinguish the user-cancelled jobs from the failed ones. https://github.com/hazelcast/hazelcast/pull/22924[#22924] * Added `flock` to guard all the concurrent `pip` executions (upgrading `pip` and `protobuf` versions) in the Jet-to-Python script. 
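As a companion to the `aggregate` and `project` IMap methods referenced above, here is a minimal sketch; the map name and sample entries are illustrative, and it assumes an embedded member:

[source,java]
----
import com.hazelcast.aggregation.Aggregators;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import com.hazelcast.projection.Projections;

import java.io.Serializable;
import java.util.Collection;

public class AggregateAndProjectExample {

    public static class Person implements Serializable {
        private final String name;
        private final int age;

        public Person(String name, int age) {
            this.name = name;
            this.age = age;
        }

        public String getName() { return name; }
        public int getAge() { return age; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<Long, Person> people = hz.getMap("people");
        people.put(1L, new Person("Alice", 34));
        people.put(2L, new Person("Bob", 28));

        // Aggregation runs on the members owning the data; only the result travels back.
        Double averageAge = people.aggregate(Aggregators.integerAvg("age"));

        // Projection: only the projected attribute is returned to the caller.
        Collection<String> names = people.project(Projections.singleAttribute("name"));

        System.out.println("average age " + averageAge + ", names " + names);
        hz.shutdown();
    }
}
----

Because `Projection` is now a functional interface, a serializable lambda or method reference can also be passed to `project()` instead of subclassing an abstract class.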
diff --git a/docs/modules/release-notes/pages/5-4-0.adoc b/docs/modules/release-notes/pages/5-4-0.adoc index 3588885f9..ec356ce12 100644 --- a/docs/modules/release-notes/pages/5-4-0.adoc +++ b/docs/modules/release-notes/pages/5-4-0.adoc @@ -227,7 +227,7 @@ https://github.com/hazelcast/hazelcast/pull/24800[#24800] === API Documentation -* Detailed the existing partition aware interface description to explain the requirements when calculating the partition ID in case partition aware is implemented. See link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/partition/PartitionAware.html[`Interface PartitionAware`]. +* Detailed the existing partition aware interface description to explain the requirements when calculating the partition ID in case partition aware is implemented. See link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/partition/PartitionAware.html[`Interface PartitionAware`]. == Fixes diff --git a/docs/modules/security/pages/logging-auditable-events.adoc b/docs/modules/security/pages/logging-auditable-events.adoc index 7ec19b9ca..abf1e2a75 100644 --- a/docs/modules/security/pages/logging-auditable-events.adoc +++ b/docs/modules/security/pages/logging-auditable-events.adoc @@ -6,7 +6,7 @@ Hazelcast {enterprise-product-name} allows observing some important cluster even using the Auditlog feature. Auditable events have a unique type ID; they contain a timestamp and importance level. The events may also contain a message and parameters. -Supported event type identifiers are listed in https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/auditlog/AuditlogTypeIds.html[`AuditlogTypeIds`^]. +Supported event type identifiers are listed in https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/auditlog/AuditlogTypeIds.html[`AuditlogTypeIds`^]. You can enable the auditlog feature in the configuration as follows: @@ -72,18 +72,18 @@ as log entries with the category name `"hazelcast.auditlog"`. The auditlog has its own SPI allowing you to provide your implementations. Relevant classes and interfaces are located -in the https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/auditlog/package-summary.html[`com.hazelcast.auditlog` package^]. +in the https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/auditlog/package-summary.html[`com.hazelcast.auditlog` package^]. The central point of auditlog SPI is the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/auditlog/AuditlogService.html[`AuditlogService` interface^] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/auditlog/AuditlogService.html[`AuditlogService` interface^] and its `log(...)` methods. Their implementations are responsible for processing auditable events, e.g., writing them to a database. `AuditlogService` also creates the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/auditlog/EventBuilder.html[`EventBuilder`^] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/auditlog/EventBuilder.html[`EventBuilder`^] instances which are used to build -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/auditlog/AuditableEvent.html[`AuditableEvents`^]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/auditlog/AuditableEvent.html[`AuditableEvents`^]. Another important piece in the SPI is the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/auditlog/AuditlogServiceFactory.html[`AuditlogServiceFactory` interface^]. 
+https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/auditlog/AuditlogServiceFactory.html[`AuditlogServiceFactory` interface^]. The factory class allows the `AuditlogService` initialization based on parameters. \ No newline at end of file diff --git a/docs/modules/serialization/pages/implementing-portable-serialization.adoc b/docs/modules/serialization/pages/implementing-portable-serialization.adoc index c34a093f0..adcdea403 100644 --- a/docs/modules/serialization/pages/implementing-portable-serialization.adoc +++ b/docs/modules/serialization/pages/implementing-portable-serialization.adoc @@ -166,7 +166,7 @@ hazelcast: ==== You can also use the interface -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/nio/serialization/VersionedPortable.html[VersionedPortable^] +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/nio/serialization/VersionedPortable.html[VersionedPortable^] which enables to upgrade the version per class, instead of global versioning. If you need to update only one class, you can use this interface. In this case, your class should implement `VersionedPortable` instead of `Portable`, diff --git a/docs/modules/serialization/pages/serializing-json.adoc b/docs/modules/serialization/pages/serializing-json.adoc index 8f91245f8..79d81d865 100644 --- a/docs/modules/serialization/pages/serializing-json.adoc +++ b/docs/modules/serialization/pages/serializing-json.adoc @@ -7,7 +7,7 @@ == Serializing into `HazelcastJsonValue` -To serialize a JSON string into `HazelcastJsonValue`, pass the string directly to the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/HazelcastJsonValue.html[`new HazelcastJsonValue()`] constructor. +To serialize a JSON string into `HazelcastJsonValue`, pass the string directly to the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/HazelcastJsonValue.html[`new HazelcastJsonValue()`] constructor. NOTE: Hazelcast does not validate the given string. It is your responsibility to use a well-formed JSON string as a `HazelcastJsonValue`. diff --git a/docs/modules/spring/pages/transaction-manager.adoc b/docs/modules/spring/pages/transaction-manager.adoc index 0c46e09fa..858ba4741 100644 --- a/docs/modules/spring/pages/transaction-manager.adoc +++ b/docs/modules/spring/pages/transaction-manager.adoc @@ -1,7 +1,7 @@ = Configuring Hazelcast Transaction Manager You can get rid of the boilerplate code to begin, commit or rollback -transactions by using https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/spring/transaction/HazelcastTransactionManager.html[HazelcastTransactionManager^] +transactions by using https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/spring/transaction/HazelcastTransactionManager.html[HazelcastTransactionManager^] which is a `PlatformTransactionManager` implementation to be used with Spring Transaction API. @@ -9,7 +9,7 @@ with Spring Transaction API. You need to register `HazelcastTransactionManager` as your transaction manager implementation and also you need to -register https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/spring/transaction/ManagedTransactionalTaskContext.html[ManagedTransactionalTaskContext^] +register https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/spring/transaction/ManagedTransactionalTaskContext.html[ManagedTransactionalTaskContext^] to access transactional data structures within your service class. 
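For the `HazelcastJsonValue` constructor mentioned above, a short usage sketch (the map name and JSON payloads are illustrative):

[source,java]
----
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastJsonValue;
import com.hazelcast.map.IMap;
import com.hazelcast.query.Predicates;

import java.util.Collection;

public class JsonValueExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<Long, HazelcastJsonValue> customers = hz.getMap("customers");

        // The string is stored as-is; Hazelcast does not validate it.
        customers.put(1L, new HazelcastJsonValue("{\"name\":\"Alice\",\"age\":34}"));
        customers.put(2L, new HazelcastJsonValue("{\"name\":\"Bob\",\"age\":28}"));

        // JSON attributes can be queried like regular object attributes.
        Collection<HazelcastJsonValue> over30 =
                customers.values(Predicates.greaterThan("age", 30));
        System.out.println(over30);

        hz.shutdown();
    }
}
----

As the NOTE above points out, the string is stored without validation, so a malformed document only surfaces later, typically when the value is queried.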
diff --git a/docs/modules/sql/pages/alter-job.adoc b/docs/modules/sql/pages/alter-job.adoc index dbbfbab04..64c754ee2 100644 --- a/docs/modules/sql/pages/alter-job.adoc +++ b/docs/modules/sql/pages/alter-job.adoc @@ -29,7 +29,7 @@ The `job_name` parameter is required. | |`SUSPEND` -|Suspend the job. For details, see the API reference for the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/Job.html#suspend()[`suspend()`] method. +|Suspend the job. For details, see the API reference for the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/Job.html#suspend()[`suspend()`] method. |<> |`OPTIONS` @@ -44,15 +44,15 @@ The `job_name` parameter is required. - `suspendOnFailure` - `timeoutMillis` -See xref:pipelines:configuring-jobs.adoc#job-configuration-options[job configuration options] for valid values for each of the listed parameters. For more details, see the API for the Job interface: link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/config/DeltaJobConfig.html[`updateConfig(DeltaJobConfig deltaConfig)`]. +See xref:pipelines:configuring-jobs.adoc#job-configuration-options[job configuration options] for valid values for each of the listed parameters. For more details, see the API for the Job interface: link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/config/DeltaJobConfig.html[`updateConfig(DeltaJobConfig deltaConfig)`]. | <> |`RESUME` -|Resume a suspended job. For details, see the API reference for the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/Job.html#resume()[`resume()`] method. +|Resume a suspended job. For details, see the API reference for the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/Job.html#resume()[`resume()`] method. |<> |`RESTART` -|Suspends and resumes the job. For details, see the API reference for the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/Job.html#restart()[`restart()`] method. +|Suspends and resumes the job. For details, see the API reference for the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/Job.html#restart()[`restart()`] method. |<> |=== diff --git a/docs/modules/sql/pages/create-data-connection.adoc b/docs/modules/sql/pages/create-data-connection.adoc index 2f6d4c9ab..24f9b2b29 100644 --- a/docs/modules/sql/pages/create-data-connection.adoc +++ b/docs/modules/sql/pages/create-data-connection.adoc @@ -53,7 +53,7 @@ If a data connection of the same name has been configured programmatically or in |Every time you issue a query against a SQL mapping, a new physical connection to the external system is created. |`SHARED` (default) -|A reusable data connection. See the implementation of each data connection type for full details of reusability: link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/dataconnection/HazelcastDataConnection.html[`HazelcastDataConnection`], link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/kafka/KafkaDataConnection.html[`KafkaDataConnection`], link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/mongodb/dataconnection/MongoDataConnection.html[`MongoDataConnection`]. +|A reusable data connection. 
See the implementation of each data connection type for full details of reusability: link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/dataconnection/HazelcastDataConnection.html[`HazelcastDataConnection`], link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/kafka/KafkaDataConnection.html[`KafkaDataConnection`], link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/mongodb/dataconnection/MongoDataConnection.html[`MongoDataConnection`]. |`OPTIONS` |Configuration options for the chosen `connection_type`. See <> and xref:data-connections:data-connections-configuration.adoc[Configuring Data Connections to External Systems] for valid parameters for specific connections. diff --git a/docs/modules/sql/pages/sql-overview.adoc b/docs/modules/sql/pages/sql-overview.adoc index 65987b8b3..81e342046 100644 --- a/docs/modules/sql/pages/sql-overview.adoc +++ b/docs/modules/sql/pages/sql-overview.adoc @@ -16,7 +16,7 @@ You can connect to the SQL service of a Hazelcast member using one of the follow - <> or xref:{page-latest-supported-mc}@management-center:tools:sql-browser.adoc[Management Center]: For fast prototyping. -- link:https://github.com/hazelcast/hazelcast-jdbc/blob/main/README.md[JDBC driver] or the link:https://docs.hazelcast.org/docs/{page-latest-supported-java-client}/javadoc/com/hazelcast/sql/SqlService.html[Java client]: For Java applications. +- link:https://github.com/hazelcast/hazelcast-jdbc/blob/main/README.md[JDBC driver] or the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/sql/SqlService.html[Java client]: For Java applications. - link:http://hazelcast.github.io/hazelcast-nodejs-client/api/{page-latest-supported-nodejs-client}/docs/modules/sql_SqlService.html[Node.js client]. diff --git a/docs/modules/storage/pages/configuring-persistence.adoc b/docs/modules/storage/pages/configuring-persistence.adoc index 04e3a9f2b..2b9bb7a12 100644 --- a/docs/modules/storage/pages/configuring-persistence.adoc +++ b/docs/modules/storage/pages/configuring-persistence.adoc @@ -635,7 +635,7 @@ hazelcast: Java:: + -- -Add configuration options to the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/EncryptionAtRestConfig.html[`EncryptionAtRestConfig` object]. +Add configuration options to the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/EncryptionAtRestConfig.html[`EncryptionAtRestConfig` object]. [source,java] ---- @@ -1073,7 +1073,7 @@ hazelcast: Java:: + -- -Add configuration options to the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/MapConfig.html[`MapConfig` object]. +Add configuration options to the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/MapConfig.html[`MapConfig` object]. [source,java] ---- @@ -1353,7 +1353,7 @@ hazelcast: Java:: + -- -Use the link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/config/JetConfig.html[`JetConfig` object]. +Use the link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/config/JetConfig.html[`JetConfig` object]. 
[source,java] ---- diff --git a/docs/modules/transactions/pages/creating-a-transaction-interface.adoc b/docs/modules/transactions/pages/creating-a-transaction-interface.adoc index 6642697bc..f070f6e25 100644 --- a/docs/modules/transactions/pages/creating-a-transaction-interface.adoc +++ b/docs/modules/transactions/pages/creating-a-transaction-interface.adoc @@ -10,7 +10,7 @@ You create a `TransactionContext` object to begin, commit and rollback a transaction. You can obtain transaction-aware instances of queues, maps, sets, lists and multimaps via `TransactionContext`, work with them and commit/rollback in one shot. You can see the TransactionContext API -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/transaction/TransactionContext.html[here^]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/transaction/TransactionContext.html[here^]. Hazelcast supports two types of transactions: ONE_PHASE and TWO_PHASE. The type of transaction controls what happens when a member crashes diff --git a/docs/modules/transactions/pages/providing-xa-transactions.adoc b/docs/modules/transactions/pages/providing-xa-transactions.adoc index 4e7f5fd6f..68033f588 100644 --- a/docs/modules/transactions/pages/providing-xa-transactions.adoc +++ b/docs/modules/transactions/pages/providing-xa-transactions.adoc @@ -16,7 +16,7 @@ commit or rollback any particular transaction consistently (all do the same). When you implement the `XAResource` interface, Hazelcast provides XA transactions. You can obtain the `HazelcastXAResource` instance via the `HazelcastInstance getXAResource` method. You can see the HazelcastXAResource API -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/transaction/HazelcastXAResource.html[here^]. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/transaction/HazelcastXAResource.html[here^]. Below is example code that uses JTA API for transaction management. diff --git a/docs/modules/troubleshoot/pages/error-handling.adoc b/docs/modules/troubleshoot/pages/error-handling.adoc index 97361d940..98c1dc08c 100644 --- a/docs/modules/troubleshoot/pages/error-handling.adoc +++ b/docs/modules/troubleshoot/pages/error-handling.adoc @@ -54,9 +54,9 @@ reprocess their input. === Processing Guarantees Streaming jobs with mutable state, those with a xref:fault-tolerance:fault-tolerance.adoc#processing-guarantee-is-a-shared-concern[processing guarantee] -set, achieve fault tolerance by periodically saving xref:fault-tolerance:fault-tolerance.adoc#distributed-snapshot[recovery snapshots]. If a streaming job was allowed to fail, the snapshots would be deleted. For this reason, by default all streaming jobs are suspended on failure, instead of failing completely. For more details, see link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/config/JobConfig.html#setSuspendOnFailure(boolean)[`JobConfig.setSuspendOnFailure`] +set, achieve fault tolerance by periodically saving xref:fault-tolerance:fault-tolerance.adoc#distributed-snapshot[recovery snapshots]. If a streaming job was allowed to fail, the snapshots would be deleted. For this reason, by default all streaming jobs are suspended on failure, instead of failing completely. 
For more details, see link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/config/JobConfig.html#setSuspendOnFailure(boolean)[`JobConfig.setSuspendOnFailure`] and -link:https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/jet/Job.html#getSuspensionCause()[`Job.getSuspensionCause`]. +link:https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/jet/Job.html#getSuspensionCause()[`Job.getSuspensionCause`]. NOTE: If you use the xref:sql:create-job.adoc#using-a-jobstatuslistener[CREATE JOB statement] to submit a job to your Hazelcast cluster, the job is automatically set to suspend on failure. diff --git a/docs/modules/wan/pages/advanced-features.adoc b/docs/modules/wan/pages/advanced-features.adoc index 7da06e5e3..4668bb999 100644 --- a/docs/modules/wan/pages/advanced-features.adoc +++ b/docs/modules/wan/pages/advanced-features.adoc @@ -165,7 +165,7 @@ To be able to use Delta WAN synchronization for a Hazelcast data structure: 1 - Configure the WAN synchronization mechanism for your WAN publisher so that it uses the Merkle tree: If configuring declaratively, you can use the `consistency-check-strategy` sub-element of the `sync` element. If configuring programmatically, you can use the setter of the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/WanSyncConfig.html[WanSyncConfig^] object. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/WanSyncConfig.html[WanSyncConfig^] object. Here is a declarative example: [tabs] @@ -277,7 +277,7 @@ XML:: ==== You can programmatically configure it, too, using the -https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/config/MerkleTreeConfig.html[MerkleTreeConfig^] object. +https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/config/MerkleTreeConfig.html[MerkleTreeConfig^] object. Here is the full declarative configuration example showing how to enable Delta WAN Synchronization, bind it to a Hazelcast data structure (an IMap in this case) and specify its depth: @@ -949,8 +949,8 @@ Map and Cache have different filter interfaces: `MapWanEventFilter` and `CacheWanEventFilter`. Both of these interfaces have the method `filter` which takes the following parameters: * `mapName`/`cacheName`: Name of the related data structure. -* `entryView`: https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/core/EntryView.html[EntryView^] -or https://docs.hazelcast.org/docs/{full-version}/javadoc/com/hazelcast/cache/CacheEntryView.html[CacheEntryView^] depending on the data structure. +* `entryView`: https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/core/EntryView.html[EntryView^] +or https://docs.hazelcast.org/docs/{os-version}/javadoc/com/hazelcast/cache/CacheEntryView.html[CacheEntryView^] depending on the data structure. * `eventType`: Enum type - `UPDATED(1)`, `REMOVED(2)` or `LOADED(3)` - depending on the event. NOTE: `LOADED` events are filtered out and not replicated to target cluster.
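Tying back to the suspend-on-failure behaviour covered in the error-handling hunks above, here is a minimal sketch of submitting a streaming job that suspends instead of failing; the job name and pipeline are placeholders, and it assumes a 5.x member with Jet enabled:

[source,java]
----
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.Job;
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.core.JobStatus;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class SuspendOnFailureExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.bootstrappedInstance();

        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(TestSources.itemStream(10))
                .withoutTimestamps()
                .writeTo(Sinks.logger());

        // Keep the job (and its snapshots) around instead of failing it outright.
        JobConfig config = new JobConfig()
                .setName("sensor-stream")
                .setSuspendOnFailure(true);

        Job job = hz.getJet().newJob(pipeline, config);

        // Later, a job that hit an error is SUSPENDED rather than FAILED,
        // and the cause can be inspected before resuming or cancelling it.
        if (job.getStatus() == JobStatus.SUSPENDED) {
            System.out.println("Suspension cause: " + job.getSuspensionCause());
        }
    }
}
----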