diff --git a/dev-docs/releasing.adoc b/dev-docs/releasing.adoc new file mode 100644 index 00000000000..ab17095b65e --- /dev/null +++ b/dev-docs/releasing.adoc @@ -0,0 +1,82 @@ + += Releasing Solr +:toc: left + +== Motivated? +So you're of the opinion that there are unreleased features or bugfixes committed to the Solr repository that the world needs? +Are you so convinced of this that you are willing to volunteer to make it happen? +Good! This document tells you how to get started. + +== Overview +The following is an overview of the artifacts you will be publishing. Although the release wizard should be your primary guide, having a picture of what's going on may help you understand and validate what the release wizard is asking you to do. + +There are five major parts of a release. They all become available to the public in slightly different ways, and it helps to understand these differences. + +IMPORTANT: All of these publications experience a time delay while background infrastructure detects and propagates your changes. There will be points in the release process when you just need to wait. + +=== Source Code and Binaries (downloads.apache.org) + +The distribution of a specific version of the source code is the theoretical center of any release by the Apache Software Foundation. As a convenience, precompiled binaries are also provided on downloads.apache.org. The mechanics of this process start with an SVN commit. The result of that commit is automatically synced to the downloads server (~5m?), and then on a longer time frame (6h?) anything on downloads is synced to archives.apache.org. See https://infra.apache.org/release-publishing.html#timeline for more detail. + +=== Maven Artifacts (repository.apache.org) +The compiled jar files, source jars, javadocs, and POMs need to be distributed to the Maven ecosystem. This happens via repository.apache.org, which is then automatically synced into Maven Central (~2h, up to 24h for Maven Central search). 
If you have released software to Maven Central via Sonatype, you will see that Apache uses the same repository manager software (Nexus), and this interface will look familiar to you. + +=== The Docker Image (hub.docker.com/_/solr) +The official Solr Docker image will be published for use by the world. The latency for this publication is unknown to me (it went smoothly, and I forgot to make a note of it), but someone who figures that out can update this section. + +=== The Web Site (solr.apache.org) +Our website must be updated with each release. It is based on the content of the solr-site git repository; changes checked in to the `main` branch there automatically become available for preview within a few minutes at https://solr.staged.apache.org/[https://solr.staged.apache.org/]. Merging main into a branch named `production` publishes your updates to the live website, also within a few minutes. Note that it is normal for the staging site links to javadocs and the ref guide to return 404, because these are published by a different process. + +=== The Reference Guide (solr.apache.org/guide) +The ref guide is published once every 24h by a Jenkins build found at https://ci-builds.apache.org/job/Solr/job/solr-reference-guide-official/[https://ci-builds.apache.org/job/Solr/job/solr-reference-guide-official/]. This is a complicated process that has been automated for you. It is primarily influenced by the publication of an `antora.yaml` file during the steps the release wizard will guide you through. If the Jenkins build runs while the release branch exists but there is no updated `antora.yaml` file, an `M.N-beta` entry may be seen on the live ref guide, but there will be no `M.N-beta` ref guide pages, and selecting it will merely browse the latest version. After the Jenkins build runs, it takes several hours for the new version to become visible in a browser (possibly due to a caching layer?). There's a window of 6-30h after the release where this is not of concern, so don't panic. 
You can check the above Jenkins job while you wait to estimate when the changes can be expected to become visible. + +== Step 0 - Become a Committer +To do a release you must become a https://community.apache.org/contributors/becomingacommitter.html[committer] on the project. Additionally, if you are not on the https://apache.org/foundation/how-it-works/#pmc[Project Management Committee (PMC)], you will also need to take special steps, and you will need to partner with a PMC member for at least one step. See https://www.apache.org/legal/release-policy.html#upload-ci + +== Step 1 - Run the Release Wizard! + +But wait! Don't I need community approval?! (you exclaim) + +Yes! But there are some details to take care of that don't require anyone's approval, and getting comfortable with the release wizard will make things smoother, so you should run through the early sections of the release wizard first. + +=== Where to find the release wizard + +The release wizard is found in a checkout of Solr at `dev-tools/scripts/releaseWizard.py`. + +=== What working copy to use + +The release wizard is meant to be used from any working copy you like, with the expectation that you will check out the PARENT branch. In other words, you should check out main to publish M.0.0, branch_Mx to publish M.N.0, and, for M.N.x, the branch that M.N.x-1 was released from. The wizard will (eventually) guide you into creating a fresh, clean checkout where your intended release is built, but you don't need that at the start. + +=== How to run the release wizard + +1. Make sure you have Python 3.4+ installed (if a higher version becomes required, the wizard should complain and tell you what it is). +2. Install dependencies with `pip3 install -r requirements.txt` from the `dev-tools/scripts` folder. +3. Run the command you see documented at `dev-tools/scripts/README.md`. Using `--dry-run` initially is fine. 
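The parent-branch rule from "What working copy to use" above can be sketched as a small shell function. This is illustrative only: the `main` and `branch_Mx` names come from the text, while the `branch_M_N` form for bugfix releases is my assumption about the usual branch naming; verify against the actual repository before checking anything out.

```shell
#!/bin/sh
# Map an intended release version M.N.x to the branch to check out.
# branch_M_N for bugfix releases is an assumed naming convention.
parent_branch() {
  version=$1
  major=${version%%.*}       # M
  rest=${version#*.}
  minor=${rest%%.*}          # N
  patch=${rest#*.}           # x
  if [ "$minor" -eq 0 ] && [ "$patch" -eq 0 ]; then
    echo "main"                      # M.0.0 is released from main
  elif [ "$patch" -eq 0 ]; then
    echo "branch_${major}x"          # M.N.0 is released from branch_Mx
  else
    echo "branch_${major}_${minor}"  # M.N.x: the branch M.N.(x-1) came from
  fi
}

parent_branch 10.0.0   # main
parent_branch 9.7.0    # branch_9x
parent_branch 9.6.1    # branch_9_6
```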
+ +NOTE: `--dry-run` does still create `~/.solr_releases` and `~/.solrrc` files and records the release version you intend for future reference, so one might say it's really better described as a "slightly damp" run :). It should, however, not execute other commands (in theory). Also, checklist elements approved during a dry run are remembered, but this may change; see https://issues.apache.org/jira/browse/SOLR-17246[SOLR-17246] + + +== Step 2 - Complete the First Two Checklists +CAUTION: The release wizard is software; it may have bugs or fail to cover every situation. You need to think about the commands you are given. Some of the known pitfalls can be found with https://issues.apache.org/jira/issues/?jql=project%20%3D%20SOLR%20AND%20resolution%20%3D%20Unresolved%20AND%20component%20%3D%20release-scripts%20ORDER%20BY%20priority%20DESC%2C%20updated%20DESC[this jira search]. + +The release wizard is organized into checklists. The first two are preparation/planning related and are good to complete before proposing the release. + +== Step 3 - Propose the Release + +Mail dev@solr.apache.org with a subject like `[DISCUSS] Solr X.Y release`, wax poetic about the wonderful features and awful bugs that are fixed but not yet released, and volunteer to do the release. If you are sufficiently inspiring, most of the PMC will quietly say to themselves "Whew! I don't have to do it", and then some of them will respond with a +1. Then, at the last minute, right before the time you proposed to start the release, 1-4 people will suddenly mail the list saying something like "Can we wait for X amount of time so that I can get SOLR-XXXXXX in?" Let the community discuss it, and give them some time, because you knew this would happen (because you read this guide!). + +== Step 4 - Continue Following the Release Wizard + +It's supposed to tell you all you need from here... 
Mail the list or post on `#solr-dev` Slack if you have questions, and file Jira issues for things that could be improved for the next person (please set the component to `release-scripts`). + +== Step 5 - Complete the After-Release Steps in the Wizard Too + +There are some things that need to be done after the release is officially published and announced. The wizard will guide you through those as well (things like updating Jira, etc.). + +== Dive deeper + +The https://infra.apache.org/release-publishing.html[Release Creation Process] web page gives a more in-depth explanation of an Apache release. + +The origins of the release wizard are described in https://www.linkedin.com/pulse/releasing-lucene-just-61-steps-jan-h%C3%B8ydahl/[this blog post] from 2019. The same tool is currently used by Lucene, Solr and Solr-Operator. + + diff --git a/gradlew b/gradlew index 308a3239001..c0f76e91038 100755 --- a/gradlew +++ b/gradlew @@ -157,13 +157,15 @@ if [ "$cygwin" = "true" -o "$msys" = "true" ] ; then fi GRADLE_WRAPPER_JAR="$APP_HOME/gradle/wrapper/gradle-wrapper.jar" -"$JAVACMD" $JAVA_OPTS --source 11 "$APP_HOME/buildSrc/src/main/java/org/apache/lucene/gradle/WrapperDownloader.java" "$GRADLE_WRAPPER_JAR" -WRAPPER_STATUS=$? -if [ "$WRAPPER_STATUS" -eq 1 ]; then - echo "ERROR: Something went wrong. Make sure you're using Java version between 11 and 21." - exit $WRAPPER_STATUS -elif [ "$WRAPPER_STATUS" -ne 0 ]; then - exit $WRAPPER_STATUS +if [ ! -e "$GRADLE_WRAPPER_JAR" ]; then + "$JAVACMD" $JAVA_OPTS "$APP_HOME/buildSrc/src/main/java/org/apache/lucene/gradle/WrapperDownloader.java" "$GRADLE_WRAPPER_JAR" + WRAPPER_STATUS=$? + if [ "$WRAPPER_STATUS" -eq 1 ]; then + echo "ERROR: Something went wrong. Make sure you're using Java version between 11 and 21." 
+ exit $WRAPPER_STATUS + elif [ "$WRAPPER_STATUS" -ne 0 ]; then + exit $WRAPPER_STATUS + fi fi CLASSPATH=$GRADLE_WRAPPER_JAR @@ -171,7 +173,7 @@ CLASSPATH=$GRADLE_WRAPPER_JAR # START OF LUCENE CUSTOMIZATION # Generate gradle.properties if they don't exist if [ ! -e "$APP_HOME/gradle.properties" ]; then - "$JAVACMD" $JAVA_OPTS --source 11 "$APP_HOME/buildSrc/src/main/java/org/apache/lucene/gradle/GradlePropertiesGenerator.java" "$APP_HOME/gradle/template.gradle.properties" "$APP_HOME/gradle.properties" + "$JAVACMD" $JAVA_OPTS "$APP_HOME/buildSrc/src/main/java/org/apache/lucene/gradle/GradlePropertiesGenerator.java" "$APP_HOME/gradle/template.gradle.properties" "$APP_HOME/gradle.properties" GENERATOR_STATUS=$? if [ "$GENERATOR_STATUS" -ne 0 ]; then exit $GENERATOR_STATUS diff --git a/gradlew.bat b/gradlew.bat index cb69a0ab1a7..172618e3ea4 100644 --- a/gradlew.bat +++ b/gradlew.bat @@ -75,9 +75,11 @@ goto fail @rem LUCENE-9266: verify and download the gradle wrapper jar if we don't have one. set GRADLE_WRAPPER_JAR=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar -"%JAVA_EXE%" %JAVA_OPTS% --source 11 "%APP_HOME%/buildSrc/src/main/java/org/apache/lucene/gradle/WrapperDownloader.java" "%GRADLE_WRAPPER_JAR%" -IF %ERRORLEVEL% EQU 1 goto failWithJvmMessage -IF %ERRORLEVEL% NEQ 0 goto fail +IF NOT EXIST "%GRADLE_WRAPPER_JAR%" ( + "%JAVA_EXE%" %JAVA_OPTS% "%APP_HOME%/buildSrc/src/main/java/org/apache/lucene/gradle/WrapperDownloader.java" "%GRADLE_WRAPPER_JAR%" + IF %ERRORLEVEL% EQU 1 goto failWithJvmMessage + IF %ERRORLEVEL% NEQ 0 goto fail +) @rem Setup the command line set CLASSPATH=%GRADLE_WRAPPER_JAR% @@ -87,7 +89,7 @@ set CLASSPATH=%GRADLE_WRAPPER_JAR% IF NOT EXIST "%APP_HOME%\gradle.properties" ( @rem local expansion is needed to check ERRORLEVEL inside control blocks. 
setlocal enableDelayedExpansion - "%JAVA_EXE%" %JAVA_OPTS% --source 11 "%APP_HOME%/buildSrc/src/main/java/org/apache/lucene/gradle/GradlePropertiesGenerator.java" "%APP_HOME%\gradle\template.gradle.properties" "%APP_HOME%\gradle.properties" + "%JAVA_EXE%" %JAVA_OPTS% "%APP_HOME%/buildSrc/src/main/java/org/apache/lucene/gradle/GradlePropertiesGenerator.java" "%APP_HOME%\gradle\template.gradle.properties" "%APP_HOME%\gradle.properties" IF %ERRORLEVEL% NEQ 0 goto fail endlocal ) diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt index 8681e5dc9b2..0277a098817 100644 --- a/solr/CHANGES.txt +++ b/solr/CHANGES.txt @@ -65,6 +65,7 @@ Deprecation Removals * SOLR-14115: Remove deprecated zkcli script in favour of equivalent bin/solr sub commmands. (Eric Pugh) +* SOLR-17284: Remove deprecated BlobRepository in favour of FileStore. (Eric Pugh) Dependency Upgrades --------------------- @@ -95,6 +96,8 @@ Other Changes * SOLR-17205: De-couple SolrJ required Java version from server Java version (janhoy) +* SOLR-17279: Introduce SecurityJson.java file to Test Framework to consolidate setting up authentication in tests. (Rudy Seitz via Eric Pugh) + ================== 9.7.0 ================== New Features --------------------- @@ -111,6 +114,12 @@ Improvements * SOLR-14115: Add linkconfig, cluster, and updateacls as commands to SolrCLI. Allow bin/solr commands to have parity with zkcli.sh commands. (Eric Pugh) +* SOLR-17274: Allow JSON atomic updates to use multiple modifiers or a modifier like 'set' as a field name + if child docs are not enabled. (Calvin Smith, David Smiley) + +* SOLR-17300: Http2SolrClient.Builder.withHttpClient now copies HttpListenerFactory (e.g. for auth, metrics, traces, etc.) + (Sanjay Dutt, David Smiley) + Optimizations --------------------- * SOLR-17257: Both Minimize Cores and the Affinity replica placement strategies would over-gather @@ -122,6 +131,17 @@ Bug Fixes * SOLR-17261: Remove unintended timeout of 60 seconds for core loading. 
(Houston Putman) +* SOLR-17049: Actually mark all replicas down at startup and truly wait for them. + This includes replicas that might not exist anymore locally. (Houston Putman, Vincent Primault) + +* SOLR-17263: Fix 'Illegal character in query' exception in HttpJdkSolrClient (Andy Webb) + +* SOLR-17275: SolrJ ZkClientClusterStateProvider, revert SOLR-17153 for perf regression when aliases are used. (Aparna Suresh) + +* PR-2475: Fixed node listing bug in Admin UI when different hostnames start with the same front part. (@hgdharold via Eric Pugh) + +* SOLR-16659: Properly construct V2 base urls instead of replacing substring "/solr" with "/api" (Andrey Bozhko via Eric Pugh) + Dependency Upgrades --------------------- (No changes) @@ -133,8 +153,19 @@ Other Changes * SOLR-16505: Use Jetty HTTP2 for index replication and other "recovery" operations (Sanjay Dutt, David Smiley) +* GITHUB#2454: Refactor preparePutOrPost method in HttpJdkSolrClient (Andy Webb) + +* SOLR-16503: Use Jetty HTTP2 for SyncStrategy and PeerSyncWithLeader for "recovery" operations (Sanjay Dutt, David Smiley) + * SOLR-16503: Introduce new default Http2SolrClient in UpdateShardHandler and deprecated old default HttpClient (Sanjay Dutt, David Smiley) +================== 9.6.1 ================== +Bug Fixes +--------------------- +* SOLR-17296: Remove (broken) singlePass attempt when reRankScale + debug is used, and fix underlying NPE. 
(hossman) + +* SOLR-17307: Use the system file separator instead of an explicit '/' in CachingDirectoryFactory (Houston Putman, hossman) + ================== 9.6.0 ================== New Features --------------------- diff --git a/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java b/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java index 5706543def4..49a887c9772 100644 --- a/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java +++ b/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java @@ -164,7 +164,8 @@ public NamedList request(SolrRequest request, String coreName) if (handler != null) { try { SolrQueryRequest req = - _parser.buildRequestFrom(null, request.getParams(), getContentStreams(request)); + _parser.buildRequestFrom( + null, request.getParams(), getContentStreams(request), request.getUserPrincipal()); req.getContext().put("httpMethod", request.getMethod().name()); req.getContext().put(PATH, path); SolrQueryResponse resp = new SolrQueryResponse(); diff --git a/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java b/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java index f07125cf975..641542038a4 100644 --- a/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java +++ b/solr/core/src/java/org/apache/solr/cloud/SyncStrategy.java @@ -24,10 +24,9 @@ import java.util.List; import java.util.concurrent.ExecutorService; import java.util.concurrent.TimeUnit; -import org.apache.http.client.HttpClient; import org.apache.solr.client.solrj.SolrClient; import org.apache.solr.client.solrj.SolrServerException; -import org.apache.solr.client.solrj.impl.HttpSolrClient; +import org.apache.solr.client.solrj.impl.Http2SolrClient; import org.apache.solr.client.solrj.request.CoreAdminRequest.RequestRecovery; import org.apache.solr.common.cloud.ZkCoreNodeProps; import org.apache.solr.common.cloud.ZkNodeProps; @@ -55,7 +54,7 @@ public class 
SyncStrategy { private volatile boolean isClosed; - private final HttpClient client; + private final Http2SolrClient solrClient; private final ExecutorService updateExecutor; @@ -69,7 +68,7 @@ private static class RecoveryRequest { public SyncStrategy(CoreContainer cc) { UpdateShardHandler updateShardHandler = cc.getUpdateShardHandler(); - client = updateShardHandler.getDefaultHttpClient(); + solrClient = updateShardHandler.getRecoveryOnlyHttpClient(); shardHandler = cc.getShardHandlerFactory().getShardHandler(); updateExecutor = updateShardHandler.getUpdateExecutor(); } @@ -361,12 +360,11 @@ private void requestRecovery( RequestRecovery recoverRequestCmd = new RequestRecovery(); recoverRequestCmd.setAction(CoreAdminAction.REQUESTRECOVERY); recoverRequestCmd.setCoreName(coreName); - try (SolrClient client = - new HttpSolrClient.Builder(baseUrl) - .withHttpClient(SyncStrategy.this.client) + new Http2SolrClient.Builder(baseUrl) + .withHttpClient(solrClient) .withConnectionTimeout(30000, TimeUnit.MILLISECONDS) - .withSocketTimeout(120000, TimeUnit.MILLISECONDS) + .withIdleTimeout(120000, TimeUnit.MILLISECONDS) .build()) { client.request(recoverRequestCmd); } catch (Throwable t) { diff --git a/solr/core/src/java/org/apache/solr/cloud/ZkController.java b/solr/core/src/java/org/apache/solr/cloud/ZkController.java index a2377ceedd9..26e659d4db6 100644 --- a/solr/core/src/java/org/apache/solr/cloud/ZkController.java +++ b/solr/core/src/java/org/apache/solr/cloud/ZkController.java @@ -39,6 +39,7 @@ import java.util.Locale; import java.util.Map; import java.util.Objects; +import java.util.Optional; import java.util.Set; import java.util.concurrent.Callable; import java.util.concurrent.ConcurrentHashMap; @@ -50,6 +51,7 @@ import java.util.concurrent.TimeoutException; import java.util.concurrent.atomic.AtomicReference; import java.util.function.Supplier; +import java.util.stream.Collectors; import org.apache.solr.client.solrj.SolrClient; import 
org.apache.solr.client.solrj.cloud.SolrCloudManager; import org.apache.solr.client.solrj.impl.CloudLegacySolrClient; @@ -1098,15 +1100,12 @@ public void publishAndWaitForDownStates() throws KeeperException, InterruptedExc publishAndWaitForDownStates(WAIT_DOWN_STATES_TIMEOUT_SECONDS); } - public void publishAndWaitForDownStates(int timeoutSeconds) - throws KeeperException, InterruptedException { - - publishNodeAsDown(getNodeName()); + public void publishAndWaitForDownStates(int timeoutSeconds) throws InterruptedException { + final String nodeName = getNodeName(); - Set collectionsWithLocalReplica = ConcurrentHashMap.newKeySet(); - for (CoreDescriptor descriptor : cc.getCoreDescriptors()) { - collectionsWithLocalReplica.add(descriptor.getCloudDescriptor().getCollectionName()); - } + Collection collectionsWithLocalReplica = publishNodeAsDown(nodeName); + Map collectionsAlreadyVerified = + new ConcurrentHashMap<>(collectionsWithLocalReplica.size()); CountDownLatch latch = new CountDownLatch(collectionsWithLocalReplica.size()); for (String collectionWithLocalReplica : collectionsWithLocalReplica) { @@ -1114,25 +1113,17 @@ public void publishAndWaitForDownStates(int timeoutSeconds) collectionWithLocalReplica, (collectionState) -> { if (collectionState == null) return false; - boolean foundStates = true; - for (CoreDescriptor coreDescriptor : cc.getCoreDescriptors()) { - if (coreDescriptor - .getCloudDescriptor() - .getCollectionName() - .equals(collectionWithLocalReplica)) { - Replica replica = - collectionState.getReplica( - coreDescriptor.getCloudDescriptor().getCoreNodeName()); - if (replica == null || replica.getState() != Replica.State.DOWN) { - foundStates = false; - } - } - } - - if (foundStates && collectionsWithLocalReplica.remove(collectionWithLocalReplica)) { + boolean allStatesCorrect = + Optional.ofNullable(collectionState.getReplicas(nodeName)).stream() + .flatMap(List::stream) + .allMatch(replica -> replica.getState() == Replica.State.DOWN); + + if 
(allStatesCorrect + && collectionsAlreadyVerified.putIfAbsent(collectionWithLocalReplica, true) + == null) { latch.countDown(); } - return foundStates; + return allStatesCorrect; }); } @@ -2849,9 +2840,14 @@ public boolean checkIfCoreNodeNameAlreadyExists(CoreDescriptor dcore) { * Best effort to set DOWN state for all replicas on node. * * @param nodeName to operate on + * @return the names of the collections that have replicas on the given node */ - public void publishNodeAsDown(String nodeName) { + public Collection publishNodeAsDown(String nodeName) { log.info("Publish node={} as DOWN", nodeName); + + ClusterState clusterState = getClusterState(); + Map> replicasPerCollectionOnNode = + clusterState.getReplicaNamesPerCollectionOnNode(nodeName); if (distributedClusterStateUpdater.isDistributedStateUpdate()) { // Note that with the current implementation, when distributed cluster state updates are // enabled, we mark the node down synchronously from this thread, whereas the Overseer cluster @@ -2862,24 +2858,15 @@ public void publishNodeAsDown(String nodeName) { distributedClusterStateUpdater.executeNodeDownStateUpdate(nodeName, zkStateReader); } else { try { - // Create a concurrently accessible set to avoid repeating collections - Set processedCollections = new HashSet<>(); - for (CoreDescriptor cd : cc.getCoreDescriptors()) { - String collName = cd.getCollectionName(); + for (String collName : replicasPerCollectionOnNode.keySet()) { DocCollection coll; if (collName != null - && processedCollections.add(collName) && (coll = zkStateReader.getCollection(collName)) != null && coll.isPerReplicaState()) { - final List replicasToDown = new ArrayList<>(coll.getSlicesMap().size()); - coll.forEachReplica( - (s, replica) -> { - if (replica.getNodeName().equals(nodeName)) { - replicasToDown.add(replica.getName()); - } - }); PerReplicaStatesOps.downReplicas( - replicasToDown, + replicasPerCollectionOnNode.get(collName).stream() + .map(Replica::getName) + 
.collect(Collectors.toList()), PerReplicaStatesOps.fetch( coll.getZNode(), zkClient, coll.getPerReplicaStates())) .persist(coll.getZNode(), zkClient); @@ -2904,6 +2891,7 @@ public void publishNodeAsDown(String nodeName) { log.warn("Could not publish node as down: ", e); } } + return replicasPerCollectionOnNode.keySet(); } /** diff --git a/solr/core/src/java/org/apache/solr/cloud/overseer/NodeMutator.java b/solr/core/src/java/org/apache/solr/cloud/overseer/NodeMutator.java index 8afcd805233..f54d2569ef5 100644 --- a/solr/core/src/java/org/apache/solr/cloud/overseer/NodeMutator.java +++ b/solr/core/src/java/org/apache/solr/cloud/overseer/NodeMutator.java @@ -18,11 +18,9 @@ import java.lang.invoke.MethodHandles; import java.util.ArrayList; -import java.util.Collection; import java.util.LinkedHashMap; import java.util.List; import java.util.Map; -import java.util.Map.Entry; import java.util.Optional; import org.apache.solr.client.solrj.cloud.SolrCloudManager; import org.apache.solr.common.cloud.ClusterState; @@ -82,39 +80,21 @@ public static Optional computeCollectionUpdate( String nodeName, String collectionName, DocCollection docCollection, SolrZkClient client) { boolean needToUpdateCollection = false; List downedReplicas = new ArrayList<>(); - Map slicesCopy = new LinkedHashMap<>(docCollection.getSlicesMap()); + final Map slicesCopy = new LinkedHashMap<>(docCollection.getSlicesMap()); - for (Entry sliceEntry : slicesCopy.entrySet()) { - Slice slice = sliceEntry.getValue(); - Map newReplicas = slice.getReplicasCopy(); - - Collection replicas = slice.getReplicas(); - for (Replica replica : replicas) { - String rNodeName = replica.getNodeName(); - if (rNodeName == null) { - throw new RuntimeException("Replica without node name! 
" + replica); - } - if (rNodeName.equals(nodeName)) { - log.debug("Update replica state for {} to {}", replica, Replica.State.DOWN); - Map props = replica.shallowCopy(); - Replica newReplica = - new Replica( - replica.getName(), - replica.node, - replica.collection, - slice.getName(), - replica.core, - Replica.State.DOWN, - replica.type, - props); - newReplicas.put(replica.getName(), newReplica); - needToUpdateCollection = true; - downedReplicas.add(replica.getName()); - } + List replicasOnNode = docCollection.getReplicas(nodeName); + if (replicasOnNode == null || replicasOnNode.isEmpty()) { + return Optional.empty(); + } + for (Replica replica : replicasOnNode) { + if (replica.getState() != Replica.State.DOWN) { + log.debug("Update replica state for {} to {}", replica, Replica.State.DOWN); + needToUpdateCollection = true; + downedReplicas.add(replica.getName()); + slicesCopy.computeIfPresent( + replica.getShard(), + (name, slice) -> slice.copyWith(replica.copyWith(Replica.State.DOWN))); } - - Slice newSlice = new Slice(slice.getName(), newReplicas, slice.shallowCopy(), collectionName); - sliceEntry.setValue(newSlice); } if (needToUpdateCollection) { diff --git a/solr/core/src/java/org/apache/solr/core/BlobRepository.java b/solr/core/src/java/org/apache/solr/core/BlobRepository.java deleted file mode 100644 index b2960a67151..00000000000 --- a/solr/core/src/java/org/apache/solr/core/BlobRepository.java +++ /dev/null @@ -1,367 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. 
You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ -package org.apache.solr.core; - -import static org.apache.solr.common.SolrException.ErrorCode.SERVER_ERROR; -import static org.apache.solr.common.SolrException.ErrorCode.SERVICE_UNAVAILABLE; - -import java.io.InputStream; -import java.lang.invoke.MethodHandles; -import java.math.BigInteger; -import java.nio.ByteBuffer; -import java.security.MessageDigest; -import java.security.NoSuchAlgorithmException; -import java.util.ArrayList; -import java.util.Collections; -import java.util.HashSet; -import java.util.List; -import java.util.Locale; -import java.util.Map; -import java.util.Random; -import java.util.Set; -import java.util.concurrent.Callable; -import java.util.concurrent.ConcurrentHashMap; -import java.util.regex.Pattern; -import org.apache.http.HttpEntity; -import org.apache.http.HttpResponse; -import org.apache.http.client.HttpClient; -import org.apache.http.client.methods.HttpGet; -import org.apache.solr.common.SolrException; -import org.apache.solr.common.cloud.ClusterState; -import org.apache.solr.common.cloud.DocCollection; -import org.apache.solr.common.cloud.Replica; -import org.apache.solr.common.cloud.Slice; -import org.apache.solr.common.cloud.ZkStateReader; -import org.apache.solr.common.params.CollectionAdminParams; -import org.apache.solr.common.util.StrUtils; -import org.apache.solr.common.util.Utils; -import org.apache.zookeeper.server.ByteBufferInputStream; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -/** - * The purpose of this class is to store the Jars loaded in memory and to keep only one copy of the - * 
Jar in a single node. - */ -public class BlobRepository { - - private static final long MAX_JAR_SIZE = - Long.parseLong(System.getProperty("runtime.lib.size", String.valueOf(5 * 1024 * 1024))); - private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); - public static final Random RANDOM; - static final Pattern BLOB_KEY_PATTERN_CHECKER = Pattern.compile(".*/\\d+"); - - static { - // We try to make things reproducible in the context of our tests by initializing the random - // instance based on the current seed - String seed = System.getProperty("tests.seed"); - if (seed == null) { - RANDOM = new Random(); - } else { - RANDOM = new Random(seed.hashCode()); - } - } - - private final CoreContainer coreContainer; - - @SuppressWarnings({"rawtypes"}) - private Map blobs = createMap(); - - // for unit tests to override - @SuppressWarnings({"rawtypes"}) - ConcurrentHashMap createMap() { - return new ConcurrentHashMap<>(); - } - - public BlobRepository(CoreContainer coreContainer) { - this.coreContainer = coreContainer; - } - - // I wanted to {@link SolrCore#loadDecodeAndCacheBlob(String, Decoder)} below but precommit - // complains - - /** - * Returns the contents of a blob containing a ByteBuffer and increments a reference count. Please - * return the same object to decrease the refcount. This is normally used for storing jar files, - * and binary raw data. If you are caching Java Objects you want to use {@code - * SolrCore#loadDecodeAndCacheBlob(String, Decoder)} - * - * @param key it is a combination of blobname and version like blobName/version - * @return The reference of a blob - */ - public BlobContentRef getBlobIncRef(String key) { - return getBlobIncRef(key, () -> addBlob(key)); - } - - /** - * Internal method that returns the contents of a blob and increments a reference count. Please - * return the same object to decrease the refcount. Only the decoded content will be cached when - * this method is used. 
Component authors attempting to share objects across cores should use - * {@code SolrCore#loadDecodeAndCacheBlob(String, Decoder)} which ensures that a proper close hook - * is also created. - * - * @param key it is a combination of blob name and version like blobName/version - * @param decoder a decoder that knows how to interpret the bytes from the blob - * @return The reference of a blob - */ - BlobContentRef getBlobIncRef(String key, Decoder decoder) { - return getBlobIncRef(key.concat(decoder.getName()), () -> addBlob(key, decoder)); - } - - BlobContentRef getBlobIncRef(String key, Decoder decoder, String url, String sha512) { - StringBuilder keyBuilder = new StringBuilder(key); - if (decoder != null) keyBuilder.append(decoder.getName()); - keyBuilder.append("/").append(sha512); - - return getBlobIncRef( - keyBuilder.toString(), - () -> new BlobContent<>(key, fetchBlobAndVerify(key, url, sha512), decoder)); - } - - // do the actual work returning the appropriate type... - @SuppressWarnings({"unchecked"}) - private BlobContentRef getBlobIncRef(String key, Callable> blobCreator) { - BlobContent aBlob; - if (this.coreContainer.isZooKeeperAware()) { - synchronized (blobs) { - aBlob = blobs.get(key); - if (aBlob == null) { - try { - aBlob = blobCreator.call(); - } catch (Exception e) { - throw new SolrException( - SolrException.ErrorCode.SERVER_ERROR, "Blob loading failed: " + e.getMessage(), e); - } - } - } - } else { - throw new SolrException( - SolrException.ErrorCode.SERVER_ERROR, "Blob loading is not supported in non-cloud mode"); - // todo - } - BlobContentRef ref = new BlobContentRef<>(aBlob); - synchronized (aBlob.references) { - aBlob.references.add(ref); - } - return ref; - } - - // For use cases sharing raw bytes - private BlobContent addBlob(String key) { - ByteBuffer b = fetchBlob(key); - BlobContent aBlob = new BlobContent<>(key, b); - blobs.put(key, aBlob); - return aBlob; - } - - // for use cases sharing java objects - private BlobContent 
addBlob(String key, Decoder decoder) { - ByteBuffer b = fetchBlob(key); - String keyPlusName = key + decoder.getName(); - BlobContent aBlob = new BlobContent<>(keyPlusName, b, decoder); - blobs.put(keyPlusName, aBlob); - return aBlob; - } - - static String INVALID_JAR_MSG = - "Invalid jar from {0} , expected sha512 hash : {1} , actual : {2}"; - - private ByteBuffer fetchBlobAndVerify(String key, String url, String sha512) { - ByteBuffer byteBuffer = fetchFromUrl(key, url); - String computedDigest = sha512Digest(byteBuffer); - if (!computedDigest.equals(sha512)) { - throw new SolrException( - SERVER_ERROR, StrUtils.formatString(INVALID_JAR_MSG, url, sha512, computedDigest)); - } - return byteBuffer; - } - - public static String sha512Digest(ByteBuffer byteBuffer) { - MessageDigest digest = null; - try { - digest = MessageDigest.getInstance("SHA-512"); - } catch (NoSuchAlgorithmException e) { - // unlikely - throw new SolrException(SERVER_ERROR, e); - } - digest.update(byteBuffer); - return String.format(Locale.ROOT, "%0128x", new BigInteger(1, digest.digest())); - } - - /** Package local for unit tests only please do not use elsewhere */ - ByteBuffer fetchBlob(String key) { - Replica replica = getSystemCollReplica(); - String url = - replica.getBaseUrl() - + "/" - + CollectionAdminParams.SYSTEM_COLL - + "/blob/" - + key - + "?wt=filestream"; - return fetchFromUrl(key, url); - } - - ByteBuffer fetchFromUrl(String key, String url) { - HttpClient httpClient = coreContainer.getUpdateShardHandler().getDefaultHttpClient(); - HttpGet httpGet = new HttpGet(url); - ByteBuffer b; - HttpResponse response = null; - HttpEntity entity = null; - try { - response = httpClient.execute(httpGet); - entity = response.getEntity(); - int statusCode = response.getStatusLine().getStatusCode(); - if (statusCode != 200) { - throw new SolrException( - SolrException.ErrorCode.NOT_FOUND, "no such blob or version available: " + key); - } - - try (InputStream is = entity.getContent()) { - b = 
Utils.toByteArray(is, MAX_JAR_SIZE); - } - } catch (Exception e) { - if (e instanceof SolrException) { - throw (SolrException) e; - } else { - throw new SolrException(SolrException.ErrorCode.NOT_FOUND, "could not load : " + key, e); - } - } finally { - Utils.consumeFully(entity); - } - return b; - } - - private Replica getSystemCollReplica() { - ZkStateReader zkStateReader = this.coreContainer.getZkController().getZkStateReader(); - ClusterState cs = zkStateReader.getClusterState(); - DocCollection coll = cs.getCollectionOrNull(CollectionAdminParams.SYSTEM_COLL); - if (coll == null) - throw new SolrException( - SERVICE_UNAVAILABLE, CollectionAdminParams.SYSTEM_COLL + " collection not available"); - ArrayList slices = new ArrayList<>(coll.getActiveSlices()); - if (slices.isEmpty()) - throw new SolrException( - SERVICE_UNAVAILABLE, - "No active slices for " + CollectionAdminParams.SYSTEM_COLL + " collection"); - Collections.shuffle(slices, RANDOM); // do load balancing - - Replica replica = null; - for (Slice slice : slices) { - List replicas = new ArrayList<>(slice.getReplicasMap().values()); - Collections.shuffle(replicas, RANDOM); - for (Replica r : replicas) { - if (r.getState() == Replica.State.ACTIVE) { - if (zkStateReader - .getClusterState() - .getLiveNodes() - .contains(r.get(ZkStateReader.NODE_NAME_PROP))) { - replica = r; - break; - } else { - if (log.isInfoEnabled()) { - log.info( - "replica {} says it is active but not a member of live nodes", - r.get(ZkStateReader.NODE_NAME_PROP)); - } - } - } - } - } - if (replica == null) { - throw new SolrException( - SERVICE_UNAVAILABLE, - "No active replica available for " + CollectionAdminParams.SYSTEM_COLL + " collection"); - } - return replica; - } - - /** - * This is to decrement a ref count - * - * @param ref The reference that is already there. 
Doing multiple calls with same ref will not - * matter - */ - public void decrementBlobRefCount(BlobContentRef ref) { - if (ref == null) return; - synchronized (ref.blob.references) { - if (!ref.blob.references.remove(ref)) { - log.error("Multiple releases for the same reference"); - } - if (ref.blob.references.isEmpty()) { - blobs.remove(ref.blob.key); - } - } - } - - public static class BlobContent { - public final String key; - // holds byte buffer or cached object, holding both is a waste of memory ref counting mechanism - private final T content; - private final Set> references = new HashSet<>(); - - @SuppressWarnings("unchecked") - public BlobContent(String key, ByteBuffer buffer, Decoder decoder) { - this.key = key; - this.content = - decoder == null ? (T) buffer : decoder.decode(new ByteBufferInputStream(buffer)); - } - - @SuppressWarnings("unchecked") - public BlobContent(String key, ByteBuffer buffer) { - this.key = key; - this.content = (T) buffer; - } - - /** - * Get the cached object. - * - * @return the object representing the content that is cached. - */ - public T get() { - return this.content; - } - } - - public interface Decoder { - - /** - * A name by which to distinguish this decoding. This only needs to be implemented if you want - * to support decoding the same blob content with more than one decoder. - * - * @return The name of the decoding, defaults to empty string. - */ - default String getName() { - return ""; - } - - /** - * A routine that knows how to convert the stream of bytes from the blob into a Java object. - * - * @param inputStream the bytes from a blob - * @return A Java object of the specified type. 
- */ - T decode(InputStream inputStream); - } - - public static class BlobContentRef { - public final BlobContent blob; - - private BlobContentRef(BlobContent blob) { - this.blob = blob; - } - } -} diff --git a/solr/core/src/java/org/apache/solr/core/CachingDirectoryFactory.java b/solr/core/src/java/org/apache/solr/core/CachingDirectoryFactory.java index 945642f80ae..a2a062a4f79 100644 --- a/solr/core/src/java/org/apache/solr/core/CachingDirectoryFactory.java +++ b/solr/core/src/java/org/apache/solr/core/CachingDirectoryFactory.java @@ -16,6 +16,7 @@ */ package org.apache.solr.core; +import java.io.File; import java.io.IOException; import java.lang.invoke.MethodHandles; import java.nio.file.DirectoryStream; @@ -362,10 +363,10 @@ private void close(CacheValue val) { } private static boolean isSubPath(CacheValue cacheValue, CacheValue otherCacheValue) { - int one = cacheValue.path.lastIndexOf('/'); - int two = otherCacheValue.path.lastIndexOf('/'); + int one = cacheValue.path.lastIndexOf(File.separatorChar); + int two = otherCacheValue.path.lastIndexOf(File.separatorChar); - return otherCacheValue.path.startsWith(cacheValue.path + "/") && two > one; + return otherCacheValue.path.startsWith(cacheValue.path + File.separatorChar) && two > one; } @Override diff --git a/solr/core/src/java/org/apache/solr/core/CoreContainer.java b/solr/core/src/java/org/apache/solr/core/CoreContainer.java index ac27230d548..d0978fcbba2 100644 --- a/solr/core/src/java/org/apache/solr/core/CoreContainer.java +++ b/solr/core/src/java/org/apache/solr/core/CoreContainer.java @@ -251,8 +251,6 @@ public JerseyAppHandlerCache getJerseyAppHandlerCache() { private volatile String hostName; - private final BlobRepository blobRepository = new BlobRepository(this); - private volatile boolean asyncSolrCoreLoad; protected volatile SecurityConfHandler securityConfHandler; @@ -2319,10 +2317,6 @@ public SolrCore getCore(String name, UUID id) { return core; } - public BlobRepository getBlobRepository() { - 
return blobRepository; - } - /** * If using asyncSolrCoreLoad=true, calling this after {@link #load()} will not return until all * cores have finished loading. diff --git a/solr/core/src/java/org/apache/solr/core/SolrCore.java b/solr/core/src/java/org/apache/solr/core/SolrCore.java index 2b1bd65f292..92a963f6bbe 100644 --- a/solr/core/src/java/org/apache/solr/core/SolrCore.java +++ b/solr/core/src/java/org/apache/solr/core/SolrCore.java @@ -89,7 +89,6 @@ import org.apache.solr.common.cloud.DocCollection; import org.apache.solr.common.cloud.Slice; import org.apache.solr.common.cloud.SolrZkClient; -import org.apache.solr.common.params.CollectionAdminParams; import org.apache.solr.common.params.CommonParams; import org.apache.solr.common.params.CommonParams.EchoParamStyle; import org.apache.solr.common.params.SolrParams; @@ -3540,40 +3539,6 @@ public List getImplicitHandlers() { return ImplicitHolder.INSTANCE; } - /** - * Convenience method to load a blob. This method minimizes the degree to which component and - * other code needs to depend on the structure of solr's object graph and ensures that a proper - * close hook is registered. This method should normally be called in {@link - * SolrCoreAware#inform(SolrCore)}, and should never be called during request processing. The - * Decoder will only run on the first invocations, subsequent invocations will return the cached - * object. - * - * @param key A key in the format of name/version for a blob stored in the {@link - * CollectionAdminParams#SYSTEM_COLL} blob store via the Blob Store API - * @param decoder a decoder with which to convert the blob into a Java Object representation - * (first time only) - * @return a reference to the blob that has already cached the decoded version. - */ - public BlobRepository.BlobContentRef loadDecodeAndCacheBlob( - String key, BlobRepository.Decoder decoder) { - // make sure component authors don't give us oddball keys with no version... 
- if (!BlobRepository.BLOB_KEY_PATTERN_CHECKER.matcher(key).matches()) { - throw new IllegalArgumentException( - "invalid key format, must end in /N where N is the version number"); - } - // define the blob - BlobRepository.BlobContentRef blobRef = - coreContainer.getBlobRepository().getBlobIncRef(key, decoder); - addCloseHook( - new CloseHook() { - @Override - public void postClose(SolrCore core) { - coreContainer.getBlobRepository().decrementBlobRefCount(blobRef); - } - }); - return blobRef; - } - public CancellableQueryTracker getCancellableQueryTracker() { return cancellableQueryTracker; } diff --git a/solr/core/src/java/org/apache/solr/core/backup/repository/DelegatingBackupRepository.java b/solr/core/src/java/org/apache/solr/core/backup/repository/DelegatingBackupRepository.java index a603c208cb8..e3b27cb073c 100644 --- a/solr/core/src/java/org/apache/solr/core/backup/repository/DelegatingBackupRepository.java +++ b/solr/core/src/java/org/apache/solr/core/backup/repository/DelegatingBackupRepository.java @@ -27,7 +27,7 @@ import org.apache.solr.core.backup.Checksum; /** Delegates to another {@link BackupRepository}. 
*/ -public class DelegatingBackupRepository implements BackupRepository { +public class DelegatingBackupRepository extends AbstractBackupRepository { public static final String PARAM_DELEGATE_REPOSITORY_NAME = "delegateRepoName"; diff --git a/solr/core/src/java/org/apache/solr/filestore/DistribFileStore.java b/solr/core/src/java/org/apache/solr/filestore/DistribFileStore.java index 3cf8cbca3a3..4c62c1ff20e 100644 --- a/solr/core/src/java/org/apache/solr/filestore/DistribFileStore.java +++ b/solr/core/src/java/org/apache/solr/filestore/DistribFileStore.java @@ -175,10 +175,9 @@ private void deleteFile() { private boolean fetchFileFromNodeAndPersist(String fromNode) { log.info("fetching a file {} from {} ", path, fromNode); - String url = - coreContainer.getZkController().getZkStateReader().getBaseUrlForNodeName(fromNode); - if (url == null) throw new SolrException(BAD_REQUEST, "No such node"); - String baseUrl = url.replace("/solr", "/api"); + String baseUrl = + coreContainer.getZkController().getZkStateReader().getBaseUrlV2ForNodeName(fromNode); + if (baseUrl == null) throw new SolrException(BAD_REQUEST, "No such node"); ByteBuffer metadata = null; Map m = null; @@ -220,10 +219,9 @@ boolean fetchFromAnyNode() { ArrayList l = coreContainer.getFileStoreAPI().shuffledNodes(); for (String liveNode : l) { try { - String baseurl = - coreContainer.getZkController().getZkStateReader().getBaseUrlForNodeName(liveNode); - String url = baseurl.replace("/solr", "/api"); - String reqUrl = url + "/node/files" + path + "?meta=true&wt=javabin&omitHeader=true"; + String baseUrl = + coreContainer.getZkController().getZkStateReader().getBaseUrlV2ForNodeName(liveNode); + String reqUrl = baseUrl + "/node/files" + path + "?meta=true&wt=javabin&omitHeader=true"; boolean nodeHasBlob = false; Object nl = Utils.executeGET( @@ -362,8 +360,8 @@ private void distribute(FileInfo info) { try { for (String node : nodes) { String baseUrl = - 
coreContainer.getZkController().getZkStateReader().getBaseUrlForNodeName(node); - String url = baseUrl.replace("/solr", "/api") + "/node/files" + info.path + "?getFrom="; + coreContainer.getZkController().getZkStateReader().getBaseUrlV2ForNodeName(node); + String url = baseUrl + "/node/files" + info.path + "?getFrom="; if (i < FETCHFROM_SRC) { // this is to protect very large clusters from overwhelming a single node // the first FETCHFROM_SRC nodes will be asked to fetch from this node. @@ -502,8 +500,8 @@ public void delete(String path) { HttpClient client = coreContainer.getUpdateShardHandler().getDefaultHttpClient(); for (String node : nodes) { String baseUrl = - coreContainer.getZkController().getZkStateReader().getBaseUrlForNodeName(node); - String url = baseUrl.replace("/solr", "/api") + "/node/files" + path; + coreContainer.getZkController().getZkStateReader().getBaseUrlV2ForNodeName(node); + String url = baseUrl + "/node/files" + path; HttpDelete del = new HttpDelete(url); // invoke delete command on all nodes asynchronously coreContainer.runAsync(() -> Utils.executeHttpMethod(client, url, null, del)); diff --git a/solr/core/src/java/org/apache/solr/filestore/FileStoreAPI.java b/solr/core/src/java/org/apache/solr/filestore/FileStoreAPI.java index 52dee0f5e9a..dc420d37b31 100644 --- a/solr/core/src/java/org/apache/solr/filestore/FileStoreAPI.java +++ b/solr/core/src/java/org/apache/solr/filestore/FileStoreAPI.java @@ -45,7 +45,6 @@ import org.apache.solr.common.util.ContentStream; import org.apache.solr.common.util.StrUtils; import org.apache.solr.common.util.Utils; -import org.apache.solr.core.BlobRepository; import org.apache.solr.core.CoreContainer; import org.apache.solr.core.SolrCore; import org.apache.solr.pkg.PackageAPI; @@ -84,7 +83,7 @@ public ArrayList shuffledNodes() { coreContainer.getZkController().getZkStateReader().getClusterState().getLiveNodes(); ArrayList l = new ArrayList<>(liveNodes); 
l.remove(coreContainer.getZkController().getNodeName()); - Collections.shuffle(l, BlobRepository.RANDOM); + Collections.shuffle(l, Utils.RANDOM); return l; } diff --git a/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java b/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java index 69a2ddda8bb..eac48745c30 100644 --- a/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java +++ b/solr/core/src/java/org/apache/solr/handler/IndexFetcher.java @@ -269,8 +269,6 @@ private SolrClient createSolrClient( Http2SolrClient httpClient = new Http2SolrClient.Builder(leaderBaseUrl) .withHttpClient(updateShardHandler.getRecoveryOnlyHttpClient()) - .withListenerFactory( - updateShardHandler.getRecoveryOnlyHttpClient().getListenerFactory()) .withBasicAuthCredentials(httpBasicAuthUser, httpBasicAuthPassword) .withIdleTimeout(soTimeout, TimeUnit.MILLISECONDS) .withConnectionTimeout(connTimeout, TimeUnit.MILLISECONDS) diff --git a/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java b/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java index 4bc2a867ab2..a421dcc42df 100644 --- a/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java +++ b/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java @@ -89,7 +89,6 @@ import org.apache.solr.search.QueryResult; import org.apache.solr.search.QueryUtils; import org.apache.solr.search.RankQuery; -import org.apache.solr.search.ReRankQParserPlugin; import org.apache.solr.search.ReturnFields; import org.apache.solr.search.SolrIndexSearcher; import org.apache.solr.search.SolrReturnFields; @@ -782,7 +781,6 @@ protected void createMainQuery(ResponseBuilder rb) { boolean distribSinglePass = rb.req.getParams().getBool(ShardParams.DISTRIB_SINGLE_PASS, false); if (distribSinglePass - || singlePassExplain(rb.req.getParams()) || (fields != null && fields.wantsField(keyFieldName) && fields.getRequestedFieldNames() != null @@ -864,36 +862,6 @@ protected void 
createMainQuery(ResponseBuilder rb) { rb.addRequest(this, sreq); } - private boolean singlePassExplain(SolrParams params) { - - /* - * Currently there is only one explain that requires a single pass - * and that is the reRank when scaling is used. - */ - - String rankQuery = params.get(CommonParams.RQ); - if (rankQuery != null) { - if (rankQuery.contains(ReRankQParserPlugin.RERANK_MAIN_SCALE) - || rankQuery.contains(ReRankQParserPlugin.RERANK_SCALE)) { - boolean debugQuery = params.getBool(CommonParams.DEBUG_QUERY, false); - if (debugQuery) { - return true; - } else { - String[] debugParams = params.getParams(CommonParams.DEBUG); - if (debugParams != null) { - for (String debugParam : debugParams) { - if (debugParam.equals("true")) { - return true; - } - } - } - } - } - } - - return false; - } - protected boolean addFL(StringBuilder fl, String field, boolean additionalAdded) { if (additionalAdded) fl.append(","); fl.append(field); diff --git a/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java b/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java index 82231d098cb..48e095a092c 100644 --- a/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java +++ b/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java @@ -29,7 +29,6 @@ import java.lang.invoke.MethodHandles; import java.util.ArrayList; import java.util.Arrays; -import java.util.Collections; import java.util.Iterator; import java.util.LinkedHashMap; import java.util.List; @@ -43,6 +42,7 @@ import org.apache.solr.common.params.CommonParams; import org.apache.solr.common.params.SolrParams; import org.apache.solr.common.params.UpdateParams; +import org.apache.solr.common.util.CollectionUtil; import org.apache.solr.common.util.ContentStream; import org.apache.solr.common.util.JsonRecordReader; import org.apache.solr.common.util.StrUtils; @@ -50,6 +50,7 @@ import org.apache.solr.handler.UpdateRequestHandler; import org.apache.solr.request.SolrQueryRequest; import 
org.apache.solr.response.SolrQueryResponse; +import org.apache.solr.schema.IndexSchema; import org.apache.solr.schema.SchemaField; import org.apache.solr.update.AddUpdateCommand; import org.apache.solr.update.CommitUpdateCommand; @@ -657,21 +658,31 @@ private Object parseObjectFieldValue(int ev, String fieldName) throws IOExceptio // Is this a child doc (true) or a partial update (false) if (isChildDoc(extendedSolrDocument)) { return extendedSolrDocument; - } else { // partial update - assert extendedSolrDocument.size() == 1; - final SolrInputField pair = extendedSolrDocument.iterator().next(); - return Collections.singletonMap(pair.getName(), pair.getValue()); + } else { // partial update: can include multiple modifiers (e.g. 'add', 'remove') + Map map = CollectionUtil.newLinkedHashMap(extendedSolrDocument.size()); + for (SolrInputField inputField : extendedSolrDocument) { + map.put(inputField.getName(), inputField.getValue()); + } + return map; } } /** Is this a child doc (true) or a partial update (false)? */ private boolean isChildDoc(SolrInputDocument extendedFieldValue) { - if (extendedFieldValue.size() != 1) { + IndexSchema schema = req.getSchema(); + // If schema doesn't support child docs, return false immediately, which + // allows people to do atomic updates with multiple modifiers (like 'add' + // and 'remove' for a single doc) and to do single-modifier updates for a + // field with a name like 'set' that is defined in the schema, both of + // which would otherwise fail. + if (!schema.isUsableForChildDocs()) { + return false; + } else if (extendedFieldValue.size() != 1) { return true; } // if the only key is a field in the schema, assume it's a child doc final String fieldName = extendedFieldValue.iterator().next().getName(); - return req.getSchema().getFieldOrNull(fieldName) != null; + return schema.getFieldOrNull(fieldName) != null; // otherwise, assume it's "set" or some other verb for a partial update. 
// NOTE: it's fundamentally ambiguous with JSON; this is a best effort try. } diff --git a/solr/core/src/java/org/apache/solr/packagemanager/PackageUtils.java b/solr/core/src/java/org/apache/solr/packagemanager/PackageUtils.java index d6e765b0d4a..c0d50dd85e2 100644 --- a/solr/core/src/java/org/apache/solr/packagemanager/PackageUtils.java +++ b/solr/core/src/java/org/apache/solr/packagemanager/PackageUtils.java @@ -48,7 +48,6 @@ import org.apache.solr.common.params.SolrParams; import org.apache.solr.common.util.NamedList; import org.apache.solr.common.util.Utils; -import org.apache.solr.core.BlobRepository; import org.apache.solr.filestore.DistribFileStore; import org.apache.solr.filestore.FileStoreAPI; import org.apache.solr.packagemanager.SolrPackage.Manifest; @@ -209,7 +208,7 @@ public static Manifest fetchManifest( NamedList response = solrClient.request(request); String manifestJson = (String) response.get("response"); String calculatedSHA512 = - BlobRepository.sha512Digest(ByteBuffer.wrap(manifestJson.getBytes(StandardCharsets.UTF_8))); + Utils.sha512Digest(ByteBuffer.wrap(manifestJson.getBytes(StandardCharsets.UTF_8))); if (expectedSHA512.equals(calculatedSHA512) == false) { throw new SolrException( ErrorCode.UNAUTHORIZED, diff --git a/solr/core/src/java/org/apache/solr/packagemanager/RepositoryManager.java b/solr/core/src/java/org/apache/solr/packagemanager/RepositoryManager.java index 109468f854c..d92d027d1c0 100644 --- a/solr/core/src/java/org/apache/solr/packagemanager/RepositoryManager.java +++ b/solr/core/src/java/org/apache/solr/packagemanager/RepositoryManager.java @@ -50,7 +50,7 @@ import org.apache.solr.common.cloud.SolrZkClient; import org.apache.solr.common.params.ModifiableSolrParams; import org.apache.solr.common.util.NamedList; -import org.apache.solr.core.BlobRepository; +import org.apache.solr.common.util.Utils; import org.apache.solr.filestore.FileStoreAPI; import org.apache.solr.packagemanager.SolrPackage.Artifact; import 
org.apache.solr.packagemanager.SolrPackage.SolrPackageRelease; @@ -193,8 +193,7 @@ private boolean installPackage(String packageName, String version) throws SolrEx } String manifestJson = getMapper().writeValueAsString(release.manifest); String manifestSHA512 = - BlobRepository.sha512Digest( - ByteBuffer.wrap(manifestJson.getBytes(StandardCharsets.UTF_8))); + Utils.sha512Digest(ByteBuffer.wrap(manifestJson.getBytes(StandardCharsets.UTF_8))); PackageUtils.postFile( solrClient, ByteBuffer.wrap(manifestJson.getBytes(StandardCharsets.UTF_8)), diff --git a/solr/core/src/java/org/apache/solr/pkg/PackageAPI.java b/solr/core/src/java/org/apache/solr/pkg/PackageAPI.java index 6e264421684..7e2c5a6f73e 100644 --- a/solr/core/src/java/org/apache/solr/pkg/PackageAPI.java +++ b/solr/core/src/java/org/apache/solr/pkg/PackageAPI.java @@ -257,11 +257,7 @@ public void refresh(PayloadObj payload) { for (String s : coreContainer.getFileStoreAPI().shuffledNodes()) { Utils.executeGET( coreContainer.getUpdateShardHandler().getDefaultHttpClient(), - coreContainer - .getZkController() - .zkStateReader - .getBaseUrlForNodeName(s) - .replace("/solr", "/api") + coreContainer.getZkController().zkStateReader.getBaseUrlV2ForNodeName(s) + "/cluster/package?wt=javabin&omitHeader=true&refreshPackage=" + p, Utils.JAVABINCONSUMER); @@ -429,11 +425,7 @@ void notifyAllNodesToSync(int expected) { for (String s : coreContainer.getFileStoreAPI().shuffledNodes()) { Utils.executeGET( coreContainer.getUpdateShardHandler().getDefaultHttpClient(), - coreContainer - .getZkController() - .zkStateReader - .getBaseUrlForNodeName(s) - .replace("/solr", "/api") + coreContainer.getZkController().zkStateReader.getBaseUrlV2ForNodeName(s) + "/cluster/package?wt=javabin&omitHeader=true&expectedVersion" + expected, Utils.JAVABINCONSUMER); diff --git a/solr/core/src/java/org/apache/solr/search/ReRankQParserPlugin.java b/solr/core/src/java/org/apache/solr/search/ReRankQParserPlugin.java index e9597c3dd86..5bf5ce4067d 100644 
--- a/solr/core/src/java/org/apache/solr/search/ReRankQParserPlugin.java
+++ b/solr/core/src/java/org/apache/solr/search/ReRankQParserPlugin.java
@@ -25,7 +25,9 @@
 import org.apache.solr.common.params.CommonParams;
 import org.apache.solr.common.params.SolrParams;
 import org.apache.solr.common.util.StrUtils;
+import org.apache.solr.handler.component.ResponseBuilder;
 import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.request.SolrRequestInfo;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -63,6 +65,40 @@ public QParser createParser(
   private static class ReRankQParser extends QParser {
 
+    private boolean isExplainResults() {
+      final SolrRequestInfo ri = SolrRequestInfo.getRequestInfo();
+      if (null != ri) {
+        final ResponseBuilder rb = ri.getResponseBuilder();
+        if (null != rb) {
+          return rb.isDebugResults();
+        }
+      }
+
+      // HACK: The code below should not be used. It is preserved for backcompat
+      // on the slim remote chance that someone is using ReRankQParserPlugin
+      // w/o using SearchHandler+ResponseBuilder
+      //
+      // (It's also wrong, and doesn't account for things like debug=true
+      // or debug=all ... but as stated: it's for esoteric backcompat purposes
+      // only, so we're not going to change it and start returning "true"
+      // if existing code doesn't expect it
+
+      boolean debugQuery = params.getBool(CommonParams.DEBUG_QUERY, false);
+
+      if (!debugQuery) {
+        String[] debugParams = params.getParams(CommonParams.DEBUG);
+        if (debugParams != null) {
+          for (String debugParam : debugParams) {
+            if ("true".equals(debugParam)) {
+              debugQuery = true;
+              break;
+            }
+          }
+        }
+      }
+      return debugQuery;
+    }
+
     public ReRankQParser(
         String query, SolrParams localParams, SolrParams params, SolrQueryRequest req) {
       super(query, localParams, params, req);
@@ -88,19 +124,8 @@ public Query parse() throws SyntaxError {
       String mainScale = localParams.get(RERANK_MAIN_SCALE);
       String reRankScale = localParams.get(RERANK_SCALE);
 
-      boolean debugQuery = params.getBool(CommonParams.DEBUG_QUERY, false);
-      if (!debugQuery) {
-        String[] debugParams = params.getParams(CommonParams.DEBUG);
-        if (debugParams != null) {
-          for (String debugParam : debugParams) {
-            if ("true".equals(debugParam)) {
-              debugQuery = true;
-              break;
-            }
-          }
-        }
-      }
+      final boolean explainResults = isExplainResults();
 
       double reRankScaleWeight = reRankWeight;
 
@@ -111,7 +136,7 @@ public Query parse() throws SyntaxError {
               reRankScaleWeight,
               reRankOperator,
               new ReRankQueryRescorer(reRankQuery, 1, ReRankOperator.REPLACE),
-              debugQuery);
+              explainResults);
 
       if (reRankScaler.scaleScores()) {
         // Scaler applies the weighting instead of the rescorer
@@ -119,7 +144,7 @@
       }
 
       return new ReRankQuery(
-          reRankQuery, reRankDocs, reRankWeight, reRankOperator, reRankScaler, debugQuery);
+          reRankQuery, reRankDocs, reRankWeight, reRankOperator, reRankScaler, explainResults);
     }
   }
 
@@ -165,7 +190,7 @@ protected float combine(
   private static final class ReRankQuery extends AbstractReRankQuery {
     private final Query reRankQuery;
     private final double reRankWeight;
-    private final boolean debugQuery;
+    private final boolean explainResults;
 
     @Override
     public int hashCode() {
@@ -198,7 +223,7 @@ public ReRankQuery(
         double reRankWeight,
         ReRankOperator reRankOperator,
         ReRankScaler reRankScaler,
-        boolean debugQuery) {
+        boolean explainResults) {
       super(
           defaultQuery,
           reRankDocs,
@@ -207,7 +232,7 @@
           reRankOperator);
       this.reRankQuery = reRankQuery;
       this.reRankWeight = reRankWeight;
-      this.debugQuery = debugQuery;
+      this.explainResults = explainResults;
     }
 
     @Override
@@ -246,13 +271,13 @@ public String toString(String s) {
     @Override
     protected Query rewrite(Query rewrittenMainQuery) throws IOException {
       return new ReRankQuery(
-          reRankQuery, reRankDocs, reRankWeight, reRankOperator, reRankScaler, debugQuery)
+          reRankQuery, reRankDocs, reRankWeight, reRankOperator, reRankScaler, explainResults)
           .wrap(rewrittenMainQuery);
     }
 
     @Override
     public boolean getCache() {
-      if (reRankScaler.scaleScores() && debugQuery) {
+      if (reRankScaler.scaleScores() && explainResults) {
         // Caching breaks explain when reRankScaling is used.
         return false;
       } else {
diff --git a/solr/core/src/java/org/apache/solr/search/ReRankScaler.java b/solr/core/src/java/org/apache/solr/search/ReRankScaler.java
index 686faa4ae1d..8410467ddf8 100644
--- a/solr/core/src/java/org/apache/solr/search/ReRankScaler.java
+++ b/solr/core/src/java/org/apache/solr/search/ReRankScaler.java
@@ -32,7 +32,7 @@ public class ReRankScaler {
   protected int mainQueryMax = -1;
   protected int reRankQueryMin = -1;
   protected int reRankQueryMax = -1;
-  protected boolean debugQuery;
+  protected boolean explainResults;
   protected ReRankOperator reRankOperator;
   protected ReRankScalerExplain reRankScalerExplain;
   private QueryRescorer replaceRescorer;
@@ -45,11 +45,11 @@ public ReRankScaler(
       double reRankScaleWeight,
       ReRankOperator reRankOperator,
       QueryRescorer replaceRescorer,
-      boolean debugQuery)
+      boolean explainResults)
       throws SyntaxError {
     this.reRankScaleWeight = reRankScaleWeight;
-    this.debugQuery = debugQuery;
+    this.explainResults = explainResults;
     this.reRankScalerExplain = new ReRankScalerExplain(mainScale, reRankScale);
     this.replaceRescorer = replaceRescorer;
     if (reRankOperator != ReRankOperator.ADD
@@ -171,12 +171,12 @@ public ScoreDoc[] scaleScores(ScoreDoc[] originalDocs, ScoreDoc[] rescoredDocs,
       scaledOriginalScoreMap = originalScoreMap;
     }
 
-    this.reRankSet = debugQuery ? new HashSet<>() : null;
+    this.reRankSet = explainResults ? new HashSet<>() : null;
 
     for (int i = 0; i < howMany; i++) {
       ScoreDoc rescoredDoc = rescoredDocs[i];
       int doc = rescoredDoc.doc;
-      if (debugQuery) {
+      if (explainResults) {
         reRankSet.add(doc);
       }
       float score = rescoredDoc.score;
@@ -345,7 +345,14 @@ public Explanation explain(
       int doc, Explanation mainQueryExplain, Explanation reRankQueryExplain) {
     float reRankScore = reRankQueryExplain.getDetails()[1].getValue().floatValue();
     float mainScore = mainQueryExplain.getValue().floatValue();
-    if (reRankSet.contains(doc)) {
+    if (null == reRankSet) {
+      // we don't have the data needed to accurately report scaling,
+      // probably due to distributed request
+      return Explanation.match(
+          reRankScore,
+          "ReRank Scaling effects unknown, consider using distrib.singlePass=true (see https://issues.apache.org/jira/browse/SOLR-17299)",
+          reRankQueryExplain);
+    } else if (reRankSet.contains(doc)) {
       if (scaleMainScores() && scaleReRankScores()) {
         if (reRankScore > 0) {
           MinMaxExplain mainScaleExplain = reRankScalerExplain.getMainScaleExplain();
diff --git a/solr/core/src/java/org/apache/solr/servlet/CoordinatorHttpSolrCall.java b/solr/core/src/java/org/apache/solr/servlet/CoordinatorHttpSolrCall.java
index aa4a0e2fd75..ec00107b717 100644
--- a/solr/core/src/java/org/apache/solr/servlet/CoordinatorHttpSolrCall.java
+++ b/solr/core/src/java/org/apache/solr/servlet/CoordinatorHttpSolrCall.java
@@ -163,8 +163,12 @@ public static SolrCore getCore(
           10,
           TimeUnit.SECONDS,
           docCollection -> {
-            for (Replica nodeNameSyntheticReplica :
-                docCollection.getReplicas(solrCall.cores.getZkController().getNodeName())) {
+            List<Replica> replicas =
+                docCollection.getReplicas(solrCall.cores.getZkController().getNodeName());
+            if (replicas == null || replicas.isEmpty()) {
+              return false;
+            }
+            for (Replica nodeNameSyntheticReplica : replicas) {
               if (nodeNameSyntheticReplica.getState() == Replica.State.ACTIVE) {
                 return true;
               }
diff --git a/solr/core/src/java/org/apache/solr/update/PeerSyncWithLeader.java
b/solr/core/src/java/org/apache/solr/update/PeerSyncWithLeader.java index c2d98d41ac0..ad7c76d752a 100644 --- a/solr/core/src/java/org/apache/solr/update/PeerSyncWithLeader.java +++ b/solr/core/src/java/org/apache/solr/update/PeerSyncWithLeader.java @@ -28,11 +28,9 @@ import java.lang.invoke.MethodHandles; import java.util.List; import java.util.Set; -import org.apache.http.client.HttpClient; import org.apache.solr.client.solrj.SolrClient; import org.apache.solr.client.solrj.SolrRequest; import org.apache.solr.client.solrj.SolrServerException; -import org.apache.solr.client.solrj.impl.HttpSolrClient; import org.apache.solr.client.solrj.request.QueryRequest; import org.apache.solr.client.solrj.response.QueryResponse; import org.apache.solr.cloud.ZkController; @@ -56,7 +54,9 @@ public class PeerSyncWithLeader implements SolrMetricProducer { private UpdateHandler uhandler; private UpdateLog ulog; - private SolrClient clientToLeader; + private final SolrClient clientToLeader; + private final String coreName; + private final String leaderBaseUrl; private boolean doFingerprint; @@ -79,15 +79,10 @@ public PeerSyncWithLeader(SolrCore core, String leaderUrl, int nUpdates) { this.doFingerprint = !"true".equals(System.getProperty("solr.disableFingerprint")); this.uhandler = core.getUpdateHandler(); this.ulog = uhandler.getUpdateLog(); - HttpClient httpClient = core.getCoreContainer().getUpdateShardHandler().getDefaultHttpClient(); - final var leaderBaseUrl = URLUtil.extractBaseUrl(leaderUrl); - final var coreName = URLUtil.extractCoreFromCoreUrl(leaderUrl); - this.clientToLeader = - new HttpSolrClient.Builder(leaderBaseUrl) - .withDefaultCollection(coreName) - .withHttpClient(httpClient) - .build(); + leaderBaseUrl = URLUtil.extractBaseUrl(leaderUrl); + coreName = URLUtil.extractCoreFromCoreUrl(leaderUrl); + clientToLeader = core.getCoreContainer().getUpdateShardHandler().getRecoveryOnlyHttpClient(); this.updater = new PeerSync.Updater(msg(), core); @@ -201,11 +196,6 @@ 
public PeerSync.PeerSyncResult sync(List startingVersions) { if (timerContext != null) { timerContext.close(); } - try { - clientToLeader.close(); - } catch (IOException e) { - log.warn("{} unable to close client to leader", msg(), e); - } } } @@ -343,7 +333,9 @@ private boolean handleUpdates( private NamedList request(ModifiableSolrParams params, String onFail) { try { - QueryResponse rsp = new QueryRequest(params, SolrRequest.METHOD.POST).process(clientToLeader); + QueryRequest request = new QueryRequest(params, SolrRequest.METHOD.POST); + request.setBasePath(leaderBaseUrl); + QueryResponse rsp = request.process(clientToLeader, coreName); Exception exception = rsp.getException(); if (exception != null) { throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, onFail); diff --git a/solr/core/src/test-files/solr/collection1/conf/atomic-update-json-test.xml b/solr/core/src/test-files/solr/collection1/conf/atomic-update-json-test.xml new file mode 100644 index 00000000000..820c13e5f93 --- /dev/null +++ b/solr/core/src/test-files/solr/collection1/conf/atomic-update-json-test.xml @@ -0,0 +1,32 @@ + + + + + + + + + + + + + id + diff --git a/solr/core/src/test-files/solr/configsets/resource-sharing/schema.xml b/solr/core/src/test-files/solr/configsets/resource-sharing/schema.xml deleted file mode 100644 index 287d4fe0149..00000000000 --- a/solr/core/src/test-files/solr/configsets/resource-sharing/schema.xml +++ /dev/null @@ -1,21 +0,0 @@ - - - - - - diff --git a/solr/core/src/test-files/solr/configsets/resource-sharing/solrconfig.xml b/solr/core/src/test-files/solr/configsets/resource-sharing/solrconfig.xml deleted file mode 100644 index 1dd92feef2e..00000000000 --- a/solr/core/src/test-files/solr/configsets/resource-sharing/solrconfig.xml +++ /dev/null @@ -1,51 +0,0 @@ - - - - - - - - - ${solr.data.dir:} - - - - - ${tests.luceneMatchVersion:LATEST} - - - - ${solr.commitwithin.softcommit:true} - - - - - - - - explicit - true - text - - - testComponent - - - - diff 
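The `PeerSyncWithLeader` hunks above stop building a dedicated `HttpSolrClient` per sync and instead split the leader core URL into a base URL and core name, routing each `QueryRequest` through a shared recovery client. The split relies on `URLUtil.extractBaseUrl` / `URLUtil.extractCoreFromCoreUrl`; the following is a plausible re-implementation of those two helpers for illustration only (the real ones live in `org.apache.solr.common.util.URLUtil` and may differ):

```java
public class CoreUrlSplit {
    // Assumed behavior: a core URL is "<base solr url>/<core name>", so the
    // base URL is everything before the final path segment.
    static String extractBaseUrl(String coreUrl) {
        String url = coreUrl.endsWith("/") ? coreUrl.substring(0, coreUrl.length() - 1) : coreUrl;
        return url.substring(0, url.lastIndexOf('/'));
    }

    static String extractCoreFromCoreUrl(String coreUrl) {
        String url = coreUrl.endsWith("/") ? coreUrl.substring(0, coreUrl.length() - 1) : coreUrl;
        return url.substring(url.lastIndexOf('/') + 1);
    }

    public static void main(String[] args) {
        String leaderUrl = "http://host:8983/solr/gettingstarted_shard1_replica_n1";
        System.out.println(extractBaseUrl(leaderUrl));         // http://host:8983/solr
        System.out.println(extractCoreFromCoreUrl(leaderUrl)); // gettingstarted_shard1_replica_n1
    }
}
```

Keeping the pieces separate lets one long-lived client serve any leader: the base URL is set per request and the core name is passed to `process`, so nothing needs closing per sync.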
--git a/solr/core/src/test/org/apache/solr/cli/CreateToolTest.java b/solr/core/src/test/org/apache/solr/cli/CreateToolTest.java index a4801c99fb0..4ff1028520e 100644 --- a/solr/core/src/test/org/apache/solr/cli/CreateToolTest.java +++ b/solr/core/src/test/org/apache/solr/cli/CreateToolTest.java @@ -17,52 +17,24 @@ package org.apache.solr.cli; -import static java.util.Collections.singletonList; -import static java.util.Collections.singletonMap; import static org.apache.solr.cli.SolrCLI.findTool; import static org.apache.solr.cli.SolrCLI.parseCmdLine; -import static org.apache.solr.security.Sha256AuthenticationProvider.getSaltedHashedValue; -import java.util.Map; import org.apache.commons.cli.CommandLine; import org.apache.solr.cloud.SolrCloudTestCase; -import org.apache.solr.common.util.Utils; -import org.apache.solr.security.BasicAuthPlugin; -import org.apache.solr.security.RuleBasedAuthorizationPlugin; +import org.apache.solr.util.SecurityJson; import org.junit.BeforeClass; import org.junit.Test; public class CreateToolTest extends SolrCloudTestCase { - private static final String USER = "solr"; - private static final String PASS = "SolrRocksAgain"; private static final String collectionName = "testCreateCollectionWithBasicAuth"; @BeforeClass public static void setupClusterWithSecurityEnabled() throws Exception { - final String SECURITY_JSON = - Utils.toJSONString( - Map.of( - "authorization", - Map.of( - "class", - RuleBasedAuthorizationPlugin.class.getName(), - "user-role", - singletonMap(USER, "admin"), - "permissions", - singletonList(Map.of("name", "all", "role", "admin"))), - "authentication", - Map.of( - "class", - BasicAuthPlugin.class.getName(), - "blockUnknown", - true, - "credentials", - singletonMap(USER, getSaltedHashedValue(PASS))))); - configureCluster(2) .addConfig("conf", configset("cloud-minimal")) - .withSecurityJson(SECURITY_JSON) + .withSecurityJson(SecurityJson.SIMPLE) .configure(); } @@ -78,7 +50,7 @@ public void 
testCreateCollectionWithBasicAuth() throws Exception { "-zkHost", cluster.getZkClient().getZkServerAddress(), "-credentials", - USER + ":" + PASS, + SecurityJson.USER_PASS, "-verbose" }; diff --git a/solr/core/src/test/org/apache/solr/cli/DeleteToolTest.java b/solr/core/src/test/org/apache/solr/cli/DeleteToolTest.java index 808f6171971..7c781f2e636 100644 --- a/solr/core/src/test/org/apache/solr/cli/DeleteToolTest.java +++ b/solr/core/src/test/org/apache/solr/cli/DeleteToolTest.java @@ -17,59 +17,30 @@ package org.apache.solr.cli; -import static java.util.Collections.singletonList; -import static java.util.Collections.singletonMap; import static org.apache.solr.cli.SolrCLI.findTool; import static org.apache.solr.cli.SolrCLI.parseCmdLine; -import static org.apache.solr.security.Sha256AuthenticationProvider.getSaltedHashedValue; -import java.util.Map; import org.apache.commons.cli.CommandLine; import org.apache.solr.client.solrj.SolrRequest; import org.apache.solr.client.solrj.SolrResponse; import org.apache.solr.client.solrj.request.CollectionAdminRequest; import org.apache.solr.cloud.SolrCloudTestCase; -import org.apache.solr.common.util.Utils; -import org.apache.solr.security.BasicAuthPlugin; -import org.apache.solr.security.RuleBasedAuthorizationPlugin; +import org.apache.solr.util.SecurityJson; import org.junit.BeforeClass; import org.junit.Test; public class DeleteToolTest extends SolrCloudTestCase { - private static final String USER = "solr"; - private static final String PASS = "SolrRocksAgain"; - @BeforeClass public static void setupClusterWithSecurityEnabled() throws Exception { - final String SECURITY_JSON = - Utils.toJSONString( - Map.of( - "authorization", - Map.of( - "class", - RuleBasedAuthorizationPlugin.class.getName(), - "user-role", - singletonMap(USER, "admin"), - "permissions", - singletonList(Map.of("name", "all", "role", "admin"))), - "authentication", - Map.of( - "class", - BasicAuthPlugin.class.getName(), - "blockUnknown", - true, - 
"credentials", - singletonMap(USER, getSaltedHashedValue(PASS))))); - configureCluster(2) .addConfig("conf", configset("cloud-minimal")) - .withSecurityJson(SECURITY_JSON) + .withSecurityJson(SecurityJson.SIMPLE) .configure(); } private > T withBasicAuth(T req) { - req.setBasicAuthCredentials(USER, PASS); + req.setBasicAuthCredentials(SecurityJson.USER, SecurityJson.PASS); return req; } @@ -94,7 +65,7 @@ public void testDeleteCollectionWithBasicAuth() throws Exception { "-zkHost", cluster.getZkClient().getZkServerAddress(), "-credentials", - USER + ":" + PASS, + SecurityJson.USER_PASS, "-verbose" }; assertEquals(0, runTool(args)); diff --git a/solr/core/src/test/org/apache/solr/cli/PackageToolTest.java b/solr/core/src/test/org/apache/solr/cli/PackageToolTest.java index dea89655772..c638d980101 100644 --- a/solr/core/src/test/org/apache/solr/cli/PackageToolTest.java +++ b/solr/core/src/test/org/apache/solr/cli/PackageToolTest.java @@ -17,15 +17,10 @@ package org.apache.solr.cli; -import static java.util.Collections.singletonList; -import static java.util.Collections.singletonMap; -import static org.apache.solr.security.Sha256AuthenticationProvider.getSaltedHashedValue; - import java.io.StringReader; import java.lang.invoke.MethodHandles; import java.util.Arrays; import java.util.List; -import java.util.Map; import java.util.Objects; import org.apache.solr.client.solrj.SolrRequest; import org.apache.solr.client.solrj.SolrResponse; @@ -34,9 +29,8 @@ import org.apache.solr.common.LinkedHashMapWriter; import org.apache.solr.common.util.StrUtils; import org.apache.solr.common.util.Utils; -import org.apache.solr.security.BasicAuthPlugin; -import org.apache.solr.security.RuleBasedAuthorizationPlugin; import org.apache.solr.util.LogLevel; +import org.apache.solr.util.SecurityJson; import org.eclipse.jetty.server.Handler; import org.eclipse.jetty.server.Server; import org.eclipse.jetty.server.ServerConnector; @@ -52,8 +46,6 @@ @LogLevel("org.apache=INFO") public class 
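Each of these test classes previously built the same `security.json` map inline; the refactor replaces that with a shared `SecurityJson.SIMPLE` constant. Based on the removed code, the structure presumably encoded by that constant looks like this plain-Java sketch (the real code serializes with `Utils.toJSONString` and salts the password via `Sha256AuthenticationProvider.getSaltedHashedValue`; the hash placeholder below is illustrative):

```java
import java.util.List;
import java.util.Map;

public class SecurityJsonSketch {
    static final String USER = "solr";
    static final String PASS = "SolrRocksAgain";

    // The map every test previously assembled inline: one admin user with
    // the "all" permission, and basic auth that blocks unknown users.
    static Map<String, Object> simple() {
        return Map.of(
            "authorization",
                Map.of(
                    "class", "org.apache.solr.security.RuleBasedAuthorizationPlugin",
                    "user-role", Map.of(USER, "admin"),
                    "permissions", List.of(Map.of("name", "all", "role", "admin"))),
            "authentication",
                Map.of(
                    "class", "org.apache.solr.security.BasicAuthPlugin",
                    "blockUnknown", true,
                    "credentials", Map.of(USER, "<salted hash of " + PASS + ">")));
    }

    public static void main(String[] args) {
        System.out.println(simple().size()); // 2
    }
}
```

Centralizing the map means a future change to the test credentials or permission model touches one class instead of eight copies.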
PackageToolTest extends SolrCloudTestCase { - private static final String USER = "solr"; - private static final String PASS = "SolrRocksAgain"; // Note for those who want to modify the jar files used in the packages used in this test: // You need to re-sign the jars for install step, as follows: @@ -71,32 +63,12 @@ public class PackageToolTest extends SolrCloudTestCase { public static void setupClusterWithSecurityEnabled() throws Exception { System.setProperty("enable.packages", "true"); - final String SECURITY_JSON = - Utils.toJSONString( - Map.of( - "authorization", - Map.of( - "class", - RuleBasedAuthorizationPlugin.class.getName(), - "user-role", - singletonMap(USER, "admin"), - "permissions", - singletonList(Map.of("name", "all", "role", "admin"))), - "authentication", - Map.of( - "class", - BasicAuthPlugin.class.getName(), - "blockUnknown", - true, - "credentials", - singletonMap(USER, getSaltedHashedValue(PASS))))); - configureCluster(2) .addConfig( "conf1", TEST_PATH().resolve("configsets").resolve("cloud-minimal").resolve("conf")) .addConfig( "conf3", TEST_PATH().resolve("configsets").resolve("cloud-minimal").resolve("conf")) - .withSecurityJson(SECURITY_JSON) + .withSecurityJson(SecurityJson.SIMPLE) .configure(); repositoryServer = @@ -116,7 +88,7 @@ public static void teardown() throws Exception { } private > T withBasicAuth(T req) { - req.setBasicAuthCredentials(USER, PASS); + req.setBasicAuthCredentials(SecurityJson.USER, SecurityJson.PASS); return req; } @@ -128,7 +100,9 @@ public void testPackageTool() throws Exception { run( tool, - new String[] {"-solrUrl", solrUrl, "list-installed", "-credentials", USER + ":" + PASS}); + new String[] { + "-solrUrl", solrUrl, "list-installed", "-credentials", SecurityJson.USER_PASS + }); run( tool, @@ -136,27 +110,36 @@ public void testPackageTool() throws Exception { "-solrUrl", solrUrl, "-credentials", - USER + ":" + PASS, + SecurityJson.USER_PASS, "add-repo", "fullstory", "http://localhost:" + 
repositoryServer.getPort(), "-credentials", - USER + ":" + PASS + SecurityJson.USER_PASS }); run( tool, - new String[] {"-solrUrl", solrUrl, "list-available", "-credentials", USER + ":" + PASS}); + new String[] { + "-solrUrl", solrUrl, "list-available", "-credentials", SecurityJson.USER_PASS + }); run( tool, new String[] { - "-solrUrl", solrUrl, "install", "question-answer:1.0.0", "-credentials", USER + ":" + PASS + "-solrUrl", + solrUrl, + "install", + "question-answer:1.0.0", + "-credentials", + SecurityJson.USER_PASS }); run( tool, - new String[] {"-solrUrl", solrUrl, "list-installed", "-credentials", USER + ":" + PASS}); + new String[] { + "-solrUrl", solrUrl, "list-installed", "-credentials", SecurityJson.USER_PASS + }); withBasicAuth(CollectionAdminRequest.createCollection("abc", "conf1", 1, 1)) .processAndWait(cluster.getSolrClient(), 10); @@ -168,7 +151,12 @@ public void testPackageTool() throws Exception { run( tool, new String[] { - "-solrUrl", solrUrl, "list-deployed", "question-answer", "-credentials", USER + ":" + PASS + "-solrUrl", + solrUrl, + "list-deployed", + "question-answer", + "-credentials", + SecurityJson.USER_PASS }); run( @@ -184,20 +172,26 @@ public void testPackageTool() throws Exception { "-p", "RH-HANDLER-PATH=" + rhPath, "-credentials", - USER + ":" + PASS + SecurityJson.USER_PASS }); - assertPackageVersion("abc", "question-answer", "1.0.0", rhPath, "1.0.0", USER + ":" + PASS); + assertPackageVersion( + "abc", "question-answer", "1.0.0", rhPath, "1.0.0", SecurityJson.USER_PASS); run( tool, new String[] { - "-solrUrl", solrUrl, "list-deployed", "question-answer", "-credentials", USER + ":" + PASS + "-solrUrl", + solrUrl, + "list-deployed", + "question-answer", + "-credentials", + SecurityJson.USER_PASS }); run( tool, new String[] { - "-solrUrl", solrUrl, "list-deployed", "-c", "abc", "-credentials", USER + ":" + PASS + "-solrUrl", solrUrl, "list-deployed", "-c", "abc", "-credentials", SecurityJson.USER_PASS }); // Should we test the 
"auto-update to latest" functionality or the default explicit deploy @@ -219,25 +213,38 @@ public void testPackageTool() throws Exception { "-collections", "abc", "-credentials", - USER + ":" + PASS + SecurityJson.USER_PASS }); - assertPackageVersion("abc", "question-answer", "$LATEST", rhPath, "1.0.0", USER + ":" + PASS); + assertPackageVersion( + "abc", "question-answer", "$LATEST", rhPath, "1.0.0", SecurityJson.USER_PASS); run( tool, new String[] { - "-solrUrl", solrUrl, "install", "question-answer", "-credentials", USER + ":" + PASS + "-solrUrl", + solrUrl, + "install", + "question-answer", + "-credentials", + SecurityJson.USER_PASS }); - assertPackageVersion("abc", "question-answer", "$LATEST", rhPath, "1.1.0", USER + ":" + PASS); + assertPackageVersion( + "abc", "question-answer", "$LATEST", rhPath, "1.1.0", SecurityJson.USER_PASS); } else { log.info("Testing explicit deployment to a different/newer version"); run( tool, new String[] { - "-solrUrl", solrUrl, "install", "question-answer", "-credentials", USER + ":" + PASS + "-solrUrl", + solrUrl, + "install", + "question-answer", + "-credentials", + SecurityJson.USER_PASS }); - assertPackageVersion("abc", "question-answer", "1.0.0", rhPath, "1.0.0", USER + ":" + PASS); + assertPackageVersion( + "abc", "question-answer", "1.0.0", rhPath, "1.0.0", SecurityJson.USER_PASS); // even if parameters are not passed in, they should be picked up from previous deployment if (random().nextBoolean()) { @@ -255,7 +262,7 @@ public void testPackageTool() throws Exception { "-p", "RH-HANDLER-PATH=" + rhPath, "-credentials", - USER + ":" + PASS + SecurityJson.USER_PASS }); } else { run( @@ -270,10 +277,11 @@ public void testPackageTool() throws Exception { "-collections", "abc", "-credentials", - USER + ":" + PASS + SecurityJson.USER_PASS }); } - assertPackageVersion("abc", "question-answer", "1.1.0", rhPath, "1.1.0", USER + ":" + PASS); + assertPackageVersion( + "abc", "question-answer", "1.1.0", rhPath, "1.1.0", 
SecurityJson.USER_PASS); } log.info("Running undeploy..."); @@ -287,13 +295,18 @@ public void testPackageTool() throws Exception { "-collections", "abc", "-credentials", - USER + ":" + PASS + SecurityJson.USER_PASS }); run( tool, new String[] { - "-solrUrl", solrUrl, "list-deployed", "question-answer", "-credentials", USER + ":" + PASS + "-solrUrl", + solrUrl, + "list-deployed", + "question-answer", + "-credentials", + SecurityJson.USER_PASS }); } diff --git a/solr/core/src/test/org/apache/solr/cli/PostToolTest.java b/solr/core/src/test/org/apache/solr/cli/PostToolTest.java index c20249ad7fb..5d8b9720f6b 100644 --- a/solr/core/src/test/org/apache/solr/cli/PostToolTest.java +++ b/solr/core/src/test/org/apache/solr/cli/PostToolTest.java @@ -17,11 +17,8 @@ package org.apache.solr.cli; -import static java.util.Collections.singletonList; -import static java.util.Collections.singletonMap; import static org.apache.solr.cli.SolrCLI.findTool; import static org.apache.solr.cli.SolrCLI.parseCmdLine; -import static org.apache.solr.security.Sha256AuthenticationProvider.getSaltedHashedValue; import java.io.ByteArrayInputStream; import java.io.File; @@ -49,8 +46,7 @@ import org.apache.solr.cloud.SolrCloudTestCase; import org.apache.solr.common.util.EnvUtils; import org.apache.solr.common.util.Utils; -import org.apache.solr.security.BasicAuthPlugin; -import org.apache.solr.security.RuleBasedAuthorizationPlugin; +import org.apache.solr.util.SecurityJson; import org.junit.BeforeClass; import org.junit.Test; @@ -62,39 +58,16 @@ @SolrTestCaseJ4.SuppressSSL public class PostToolTest extends SolrCloudTestCase { - private static final String USER = "solr"; - private static final String PASS = "SolrRocksAgain"; - @BeforeClass public static void setupClusterWithSecurityEnabled() throws Exception { - final String SECURITY_JSON = - Utils.toJSONString( - Map.of( - "authorization", - Map.of( - "class", - RuleBasedAuthorizationPlugin.class.getName(), - "user-role", - singletonMap(USER, 
"admin"), - "permissions", - singletonList(Map.of("name", "all", "role", "admin"))), - "authentication", - Map.of( - "class", - BasicAuthPlugin.class.getName(), - "blockUnknown", - true, - "credentials", - singletonMap(USER, getSaltedHashedValue(PASS))))); - configureCluster(2) .addConfig("conf1", configset("cloud-minimal")) - .withSecurityJson(SECURITY_JSON) + .withSecurityJson(SecurityJson.SIMPLE) .configure(); } private > T withBasicAuth(T req) { - req.setBasicAuthCredentials(USER, PASS); + req.setBasicAuthCredentials(SecurityJson.USER, SecurityJson.PASS); return req; } @@ -116,7 +89,7 @@ public void testBasicRun() throws Exception { "--solr-update-url", cluster.getJettySolrRunner(0).getBaseUrl() + "/" + collection + "/update", "--credentials", - USER + ":" + PASS, + SecurityJson.USER_PASS, jsonDoc.getAbsolutePath() }; assertEquals(0, runTool(args)); @@ -154,7 +127,7 @@ public void testRunWithCollectionParam() throws Exception { fw.flush(); String[] args = { - "post", "-c", collection, "-credentials", USER + ":" + PASS, jsonDoc.getAbsolutePath() + "post", "-c", collection, "-credentials", SecurityJson.USER_PASS, jsonDoc.getAbsolutePath() }; assertEquals(0, runTool(args)); diff --git a/solr/core/src/test/org/apache/solr/cli/TestExportTool.java b/solr/core/src/test/org/apache/solr/cli/TestExportTool.java index 88b4e4a49d1..9ca2a36a6d4 100644 --- a/solr/core/src/test/org/apache/solr/cli/TestExportTool.java +++ b/solr/core/src/test/org/apache/solr/cli/TestExportTool.java @@ -17,11 +17,8 @@ package org.apache.solr.cli; -import static java.util.Collections.singletonList; -import static java.util.Collections.singletonMap; import static org.apache.solr.cli.SolrCLI.findTool; import static org.apache.solr.cli.SolrCLI.parseCmdLine; -import static org.apache.solr.security.Sha256AuthenticationProvider.getSaltedHashedValue; import java.io.File; import java.io.FileInputStream; @@ -50,9 +47,7 @@ import org.apache.solr.common.cloud.Slice; import 
org.apache.solr.common.util.FastInputStream; import org.apache.solr.common.util.JsonRecordReader; -import org.apache.solr.common.util.Utils; -import org.apache.solr.security.BasicAuthPlugin; -import org.apache.solr.security.RuleBasedAuthorizationPlugin; +import org.apache.solr.util.SecurityJson; import org.junit.Test; @SolrTestCaseJ4.SuppressSSL @@ -233,36 +228,14 @@ public void testVeryLargeCluster() throws Exception { @Test public void testWithBasicAuth() throws Exception { String COLLECTION_NAME = "secureCollection"; - String USER = "solr"; - String PASS = "SolrRocksAgain"; - final String SECURITY_JSON = - Utils.toJSONString( - Map.of( - "authorization", - Map.of( - "class", - RuleBasedAuthorizationPlugin.class.getName(), - "user-role", - singletonMap(USER, "admin"), - "permissions", - singletonList(Map.of("name", "all", "role", "admin"))), - "authentication", - Map.of( - "class", - BasicAuthPlugin.class.getName(), - "blockUnknown", - true, - "credentials", - singletonMap(USER, getSaltedHashedValue(PASS))))); - configureCluster(2) .addConfig("conf", configset("cloud-minimal")) - .withSecurityJson(SECURITY_JSON) + .withSecurityJson(SecurityJson.SIMPLE) .configure(); try { CollectionAdminRequest.createCollection(COLLECTION_NAME, "conf", 2, 1) - .setBasicAuthCredentials(USER, PASS) + .setBasicAuthCredentials(SecurityJson.USER, SecurityJson.PASS) .process(cluster.getSolrClient()); cluster.waitForActiveCollection(COLLECTION_NAME, 2, 2); @@ -273,7 +246,7 @@ public void testWithBasicAuth() throws Exception { "-url", cluster.getJettySolrRunner(0).getBaseUrl() + "/" + COLLECTION_NAME, "-credentials", - USER + ":" + PASS, + SecurityJson.USER_PASS, "-out", outFile.getAbsolutePath(), "-verbose" diff --git a/solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java b/solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java index c3a9e77f55d..3863043fc1a 100644 --- a/solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java +++ 
b/solr/core/src/test/org/apache/solr/cloud/AliasIntegrationTest.java @@ -238,9 +238,8 @@ public void testProperties() throws Exception { public void testModifyPropertiesV2() throws Exception { final String aliasName = getSaferTestName(); ZkStateReader zkStateReader = createColectionsAndAlias(aliasName); - final String baseUrl = - cluster.getRandomJetty(random()).getBaseUrl().toString().replace("/solr", ""); - String aliasApi = String.format(Locale.ENGLISH, "/api/aliases/%s/properties", aliasName); + final String baseUrl = cluster.getRandomJetty(random()).getBaseURLV2().toString(); + String aliasApi = String.format(Locale.ENGLISH, "/aliases/%s/properties", aliasName); HttpPut withoutBody = new HttpPut(baseUrl + aliasApi); assertEquals(400, httpClient.execute(withoutBody).getStatusLine().getStatusCode()); @@ -260,7 +259,7 @@ public void testModifyPropertiesV2() throws Exception { checkFooAndBarMeta(aliasName, zkStateReader, "baz", "bam"); String aliasPropertyApi = - String.format(Locale.ENGLISH, "/api/aliases/%s/properties/%s", aliasName, "foo"); + String.format(Locale.ENGLISH, "/aliases/%s/properties/%s", aliasName, "foo"); HttpPut updateByProperty = new HttpPut(baseUrl + aliasPropertyApi); updateByProperty.setEntity( new StringEntity("{ \"value\": \"zab\" }", ContentType.APPLICATION_JSON)); diff --git a/solr/core/src/test/org/apache/solr/cloud/ClusterStateMockUtil.java b/solr/core/src/test/org/apache/solr/cloud/ClusterStateMockUtil.java index 674fe60b8a6..51a3c3263f5 100644 --- a/solr/core/src/test/org/apache/solr/cloud/ClusterStateMockUtil.java +++ b/solr/core/src/test/org/apache/solr/cloud/ClusterStateMockUtil.java @@ -111,7 +111,6 @@ public static ZkStateReader buildClusterState(String clusterDescription, String. public static ZkStateReader buildClusterState( String clusterDescription, int replicationFactor, String... 
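The `AliasIntegrationTest` hunk above swaps string surgery on the v1 base URL (`.replace("/solr", "") + "/api/..."`) for `getBaseURLV2()`. A small sketch of why the old derivation is fragile (URLs here are illustrative, not from the test):

```java
public class V2BaseUrl {
    // Old approach: derive the v2 API root by stripping "/solr" from the
    // v1 base URL, then prepend "/api" in each path template.
    static String oldV2Root(String v1BaseUrl) {
        return v1BaseUrl.replace("/solr", "") + "/api";
    }

    public static void main(String[] args) {
        System.out.println(oldV2Root("http://localhost:8983/solr")); // http://localhost:8983/api

        // String.replace substitutes every occurrence, so an unlucky context
        // path is silently mangled -- one reason asking the Jetty runner for
        // its v2 base URL directly is safer:
        System.out.println(oldV2Root("http://host:8983/solrcloud/solr")); // http://host:8983cloud/api
    }
}
```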
liveNodes) { Map slices = null; - Map replicas = null; Map collectionProps = new HashMap<>(); collectionProps.put(ZkStateReader.REPLICATION_FACTOR, Integer.toString(replicationFactor)); Map collectionStates = new HashMap<>(); @@ -138,9 +137,9 @@ public static ZkStateReader buildClusterState( collectionStates.put(docCollection.getName(), docCollection); break; case "s": - replicas = new HashMap<>(); if (collName == null) collName = "collection" + (collectionStates.size() + 1); - slice = new Slice(sliceName = "slice" + (slices.size() + 1), replicas, null, collName); + slice = + new Slice(sliceName = "slice" + (slices.size() + 1), new HashMap<>(), null, collName); slices.put(slice.getName(), slice); // hack alert: the DocCollection constructor copies over active slices to its active slice @@ -168,7 +167,7 @@ public static ZkStateReader buildClusterState( // O(n^2) alert! but this is for mocks and testing so shouldn't be used for very large // cluster states boolean leaderFound = false; - for (Map.Entry entry : replicas.entrySet()) { + for (Map.Entry entry : slice.getReplicasMap().entrySet()) { Replica value = entry.getValue(); if ("true".equals(value.get(ReplicaStateProps.LEADER))) { leaderFound = true; @@ -178,15 +177,13 @@ public static ZkStateReader buildClusterState( if (!leaderFound && !m.group(1).equals("p")) { replicaPropMap.put(ReplicaStateProps.LEADER, "true"); } - replica = new Replica(replicaName, replicaPropMap, collName, sliceName); - replicas.put(replica.getName(), replica); // hack alert: re-create slice with existing data and new replicas map so that it updates // its internal leader attribute - slice = new Slice(slice.getName(), replicas, null, collName); + slice = slice.copyWith(new Replica(replicaName, replicaPropMap, collName, sliceName)); slices.put(slice.getName(), slice); - // we don't need to update doc collection again because we aren't adding a new slice or - // changing its state + docCollection = docCollection.copyWithSlices(slices); + 
collectionStates.put(docCollection.getName(), docCollection); break; default: break; diff --git a/solr/core/src/test/org/apache/solr/cloud/DistribDocExpirationUpdateProcessorTest.java b/solr/core/src/test/org/apache/solr/cloud/DistribDocExpirationUpdateProcessorTest.java index 4780839e06f..143f1f4dee8 100644 --- a/solr/core/src/test/org/apache/solr/cloud/DistribDocExpirationUpdateProcessorTest.java +++ b/solr/core/src/test/org/apache/solr/cloud/DistribDocExpirationUpdateProcessorTest.java @@ -16,9 +16,6 @@ */ package org.apache.solr.cloud; -import static java.util.Collections.singletonList; -import static org.apache.solr.security.Sha256AuthenticationProvider.getSaltedHashedValue; - import java.io.IOException; import java.lang.invoke.MethodHandles; import java.util.HashMap; @@ -43,11 +40,9 @@ import org.apache.solr.common.params.SolrParams; import org.apache.solr.common.util.NamedList; import org.apache.solr.common.util.TimeSource; -import org.apache.solr.common.util.Utils; import org.apache.solr.handler.ReplicationHandler; -import org.apache.solr.security.BasicAuthPlugin; -import org.apache.solr.security.RuleBasedAuthorizationPlugin; import org.apache.solr.update.processor.DocExpirationUpdateProcessorFactory; +import org.apache.solr.util.SecurityJson; import org.apache.solr.util.TimeOut; import org.junit.After; import org.junit.Test; @@ -91,30 +86,11 @@ public void setupCluster(boolean security) throws Exception { COLLECTION = "expiring"; if (security) { - USER = "solr"; - PASS = "SolrRocksAgain"; + USER = SecurityJson.USER; + PASS = SecurityJson.PASS; COLLECTION += "_secure"; - final String SECURITY_JSON = - Utils.toJSONString( - Map.of( - "authorization", - Map.of( - "class", - RuleBasedAuthorizationPlugin.class.getName(), - "user-role", - Map.of(USER, "admin"), - "permissions", - singletonList(Map.of("name", "all", "role", "admin"))), - "authentication", - Map.of( - "class", - BasicAuthPlugin.class.getName(), - "blockUnknown", - true, - "credentials", - 
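The `ClusterStateMockUtil` rewrite above stops mutating a shared replicas map and instead rebuilds the slice with `Slice.copyWith(replica)` and the collection with `copyWithSlices(slices)`. A toy model of that copy-on-write style (`Replica` and `Slice` here are simplified stand-ins, not the Solr classes):

```java
import java.util.HashMap;
import java.util.Map;

public class CopyWithSketch {
    record Replica(String name) {}

    record Slice(String name, Map<String, Replica> replicas) {
        // Analogue of Slice#copyWith: return a new Slice containing the
        // extra replica, leaving this instance untouched.
        Slice copyWith(Replica r) {
            Map<String, Replica> copy = new HashMap<>(replicas);
            copy.put(r.name(), r);
            return new Slice(name, Map.copyOf(copy));
        }
    }

    public static void main(String[] args) {
        Slice before = new Slice("slice1", Map.of());
        Slice after = before.copyWith(new Replica("replica1"));
        System.out.println(before.replicas().size()); // 0 -- original unchanged
        System.out.println(after.replicas().size());  // 1
    }
}
```

Because each copy recomputes derived state (in the real class, the slice's internal leader attribute), downstream consumers never see a half-updated object, which is why the mock must also re-run `copyWithSlices` after adding a replica.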
Map.of(USER, getSaltedHashedValue(PASS))))); - b.withSecurityJson(SECURITY_JSON); + b.withSecurityJson(SecurityJson.SIMPLE); } b.configure(); diff --git a/solr/core/src/test/org/apache/solr/cloud/MigrateReplicasTest.java b/solr/core/src/test/org/apache/solr/cloud/MigrateReplicasTest.java index 0be4ca1746b..f96b621c023 100644 --- a/solr/core/src/test/org/apache/solr/cloud/MigrateReplicasTest.java +++ b/solr/core/src/test/org/apache/solr/cloud/MigrateReplicasTest.java @@ -344,8 +344,8 @@ public void testFailOnSingleNode() throws Exception { Map r = null; String uri = - cluster.getJettySolrRunners().get(0).getBaseUrl().toString().replace("/solr", "") - + "/api/cluster/replicas/migrate"; + cluster.getJettySolrRunners().get(0).getBaseURLV2().toString() + + "/cluster/replicas/migrate"; try { httpRequest = new HttpPost(uri); diff --git a/solr/core/src/test/org/apache/solr/cloud/RecoveryZkTestWithAuth.java b/solr/core/src/test/org/apache/solr/cloud/RecoveryZkTestWithAuth.java new file mode 100644 index 00000000000..63f66736b83 --- /dev/null +++ b/solr/core/src/test/org/apache/solr/cloud/RecoveryZkTestWithAuth.java @@ -0,0 +1,121 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.solr.cloud; + +import static org.apache.solr.client.solrj.response.RequestStatusState.COMPLETED; + +import java.io.IOException; +import java.util.List; +import java.util.concurrent.TimeUnit; +import org.apache.solr.client.solrj.SolrClient; +import org.apache.solr.client.solrj.SolrQuery; +import org.apache.solr.client.solrj.SolrRequest; +import org.apache.solr.client.solrj.SolrResponse; +import org.apache.solr.client.solrj.SolrServerException; +import org.apache.solr.client.solrj.impl.CloudLegacySolrClient; +import org.apache.solr.client.solrj.impl.HttpSolrClient; +import org.apache.solr.client.solrj.request.CollectionAdminRequest; +import org.apache.solr.client.solrj.request.QueryRequest; +import org.apache.solr.client.solrj.request.UpdateRequest; +import org.apache.solr.client.solrj.response.QueryResponse; +import org.apache.solr.client.solrj.response.RequestStatusState; +import org.apache.solr.common.cloud.DocCollection; +import org.apache.solr.common.cloud.Replica; +import org.apache.solr.common.cloud.Slice; +import org.apache.solr.util.SecurityJson; +import org.junit.BeforeClass; +import org.junit.Test; + +public class RecoveryZkTestWithAuth extends SolrCloudTestCase { + @BeforeClass + public static void setupCluster() throws Exception { + cluster = + configureCluster(1) + .addConfig("conf", configset("cloud-minimal")) + .withSecurityJson(SecurityJson.SIMPLE) + .configure(); + } + + private > T withBasicAuth(T req) { + req.setBasicAuthCredentials(SecurityJson.USER, SecurityJson.PASS); + return req; + } + + private QueryResponse queryWithBasicAuth(SolrClient client, SolrQuery q) + throws IOException, SolrServerException { + return withBasicAuth(new QueryRequest(q)).process(client); + } + + @Test + public void testRecoveryWithAuthEnabled() throws Exception { + final String collection = "recoverytestwithauth"; + withBasicAuth(CollectionAdminRequest.createCollection(collection, "conf", 1, 1)) + .process(cluster.getSolrClient()); + 
waitForState( + "Expected a collection with one shard and one replicas", collection, clusterShape(1, 1)); + try (SolrClient solrClient = + cluster.basicSolrClientBuilder().withDefaultCollection(collection).build()) { + UpdateRequest commitReq = new UpdateRequest(); + withBasicAuth(commitReq); + for (int i = 0; i < 500; i++) { + UpdateRequest req = new UpdateRequest(); + withBasicAuth(req).add(sdoc("id", i, "name", "name = " + i)); + req.process(solrClient, collection); + if (i % 10 == 0) { + commitReq.commit(solrClient, collection); + } + } + commitReq.commit(solrClient, collection); + + withBasicAuth(CollectionAdminRequest.addReplicaToShard(collection, "shard1")); + CollectionAdminRequest.AddReplica addReplica = + CollectionAdminRequest.addReplicaToShard(collection, "shard1"); + withBasicAuth(addReplica); + RequestStatusState status = addReplica.processAndWait(collection, solrClient, 120); + assertEquals(COMPLETED, status); + cluster + .getZkStateReader() + .waitForState(collection, 120, TimeUnit.SECONDS, clusterShape(1, 2)); + DocCollection state = getCollectionState(collection); + assertShardConsistency(state.getSlice("shard1"), true); + } + } + + private void assertShardConsistency(Slice shard, boolean expectDocs) throws Exception { + List replicas = shard.getReplicas(r -> r.getState() == Replica.State.ACTIVE); + long[] numCounts = new long[replicas.size()]; + int i = 0; + for (Replica replica : replicas) { + try (var client = + new HttpSolrClient.Builder(replica.getBaseUrl()) + .withDefaultCollection(replica.getCoreName()) + .withHttpClient(((CloudLegacySolrClient) cluster.getSolrClient()).getHttpClient()) + .build()) { + var q = new SolrQuery("*:*"); + q.add("distrib", "false"); + numCounts[i] = queryWithBasicAuth(client, q).getResults().getNumFound(); + i++; + } + } + for (int j = 1; j < replicas.size(); j++) { + if (numCounts[j] != numCounts[j - 1]) + fail("Mismatch in counts between replicas"); // TODO improve this! 
+ if (numCounts[j] == 0 && expectDocs) + fail("Expected docs on shard " + shard.getName() + " but found none"); + } + } +} diff --git a/solr/core/src/test/org/apache/solr/cloud/TestConfigSetsAPI.java b/solr/core/src/test/org/apache/solr/cloud/TestConfigSetsAPI.java index 57a067cc217..a45232a3af9 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestConfigSetsAPI.java +++ b/solr/core/src/test/org/apache/solr/cloud/TestConfigSetsAPI.java @@ -1581,7 +1581,7 @@ private long uploadGivenConfigSet( final ByteBuffer fileBytes = TestSolrConfigHandler.getFileContent(file.getAbsolutePath(), false); final String uriEnding = - "/api/cluster/configs/" + "/cluster/configs/" + configSetName + suffix + (!overwrite ? "?overwrite=false" : "") @@ -1590,8 +1590,7 @@ private long uploadGivenConfigSet( Map map = postDataAndGetResponse( cluster.getSolrClient(), - cluster.getJettySolrRunners().get(0).getBaseUrl().toString().replace("/solr", "") - + uriEnding, + cluster.getJettySolrRunners().get(0).getBaseURLV2().toString() + uriEnding, fileBytes, username, usePut); @@ -1634,7 +1633,7 @@ private long uploadSingleConfigSetFile( final ByteBuffer sampleConfigFile = TestSolrConfigHandler.getFileContent(file.getAbsolutePath(), false); final String uriEnding = - "/api/cluster/configs/" + "/cluster/configs/" + configSetName + suffix + "/" @@ -1646,8 +1645,7 @@ private long uploadSingleConfigSetFile( Map map = postDataAndGetResponse( cluster.getSolrClient(), - cluster.getJettySolrRunners().get(0).getBaseUrl().toString().replace("/solr", "") - + uriEnding, + cluster.getJettySolrRunners().get(0).getBaseURLV2().toString() + uriEnding, sampleConfigFile, username, usePut); diff --git a/solr/core/src/test/org/apache/solr/cloud/TestPullReplicaWithAuth.java b/solr/core/src/test/org/apache/solr/cloud/TestPullReplicaWithAuth.java index 63ae8044618..902c9e2c6de 100644 --- a/solr/core/src/test/org/apache/solr/cloud/TestPullReplicaWithAuth.java +++ 
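Several of these test classes share the same small `withBasicAuth` helper: generic over the request type so the caller gets back the concrete subclass, credentials attached, ready for chaining. A standalone sketch of that shape (the `Request` classes below are hypothetical stand-ins for SolrJ's `SolrRequest` hierarchy):

```java
public class WithBasicAuth {
    static final String USER = "solr";
    static final String PASS = "SolrRocksAgain";

    // Minimal stand-in for SolrRequest: just records credentials.
    static class Request {
        String user, pass;
        void setBasicAuthCredentials(String u, String p) { user = u; pass = p; }
    }

    static class QueryRequest extends Request {}

    // Generic bound keeps the concrete type: the caller can keep chaining
    // QueryRequest-specific methods on the returned value.
    static <T extends Request> T withBasicAuth(T req) {
        req.setBasicAuthCredentials(USER, PASS);
        return req;
    }

    public static void main(String[] args) {
        QueryRequest q = withBasicAuth(new QueryRequest());
        System.out.println(q.user + ":" + q.pass); // solr:SolrRocksAgain
    }
}
```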
b/solr/core/src/test/org/apache/solr/cloud/TestPullReplicaWithAuth.java @@ -16,13 +16,10 @@ */ package org.apache.solr.cloud; -import static java.util.Collections.singletonList; -import static java.util.Collections.singletonMap; import static org.apache.solr.cloud.TestPullReplica.assertNumberOfReplicas; import static org.apache.solr.cloud.TestPullReplica.assertUlogPresence; import static org.apache.solr.cloud.TestPullReplica.waitForDeletion; import static org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas; -import static org.apache.solr.security.Sha256AuthenticationProvider.getSaltedHashedValue; import java.io.IOException; import java.util.EnumSet; @@ -43,48 +40,24 @@ import org.apache.solr.common.cloud.DocCollection; import org.apache.solr.common.cloud.Replica; import org.apache.solr.common.cloud.Slice; -import org.apache.solr.common.util.Utils; -import org.apache.solr.security.BasicAuthPlugin; -import org.apache.solr.security.RuleBasedAuthorizationPlugin; +import org.apache.solr.util.SecurityJson; import org.junit.BeforeClass; import org.junit.Test; public class TestPullReplicaWithAuth extends SolrCloudTestCase { - private static final String USER = "solr"; - private static final String PASS = "SolrRocksAgain"; private static final String collectionName = "testPullReplicaWithAuth"; @BeforeClass public static void setupClusterWithSecurityEnabled() throws Exception { - final String SECURITY_JSON = - Utils.toJSONString( - Map.of( - "authorization", - Map.of( - "class", - RuleBasedAuthorizationPlugin.class.getName(), - "user-role", - singletonMap(USER, "admin"), - "permissions", - singletonList(Map.of("name", "all", "role", "admin"))), - "authentication", - Map.of( - "class", - BasicAuthPlugin.class.getName(), - "blockUnknown", - true, - "credentials", - singletonMap(USER, getSaltedHashedValue(PASS))))); - configureCluster(2) .addConfig("conf", configset("cloud-minimal")) - .withSecurityJson(SECURITY_JSON) + .withSecurityJson(SecurityJson.SIMPLE) 
.configure(); } private <T extends SolrRequest<?>> T withBasicAuth(T req) { - req.setBasicAuthCredentials(USER, PASS); + req.setBasicAuthCredentials(SecurityJson.USER, SecurityJson.PASS); return req; } @@ -124,7 +97,8 @@ public void testPKIAuthWorksForPullReplication() throws Exception { } List<Replica> pullReplicas = s.getReplicas(EnumSet.of(Replica.Type.PULL)); - waitForNumDocsInAllReplicas(numDocs, pullReplicas, "*:*", USER, PASS); + waitForNumDocsInAllReplicas( + numDocs, pullReplicas, "*:*", SecurityJson.USER, SecurityJson.PASS); for (Replica r : pullReplicas) { try (SolrClient pullReplicaClient = getHttpSolrClient(r)) { @@ -170,7 +144,7 @@ public void testPKIAuthWorksForPullReplication() throws Exception { s = docCollection.getSlices().iterator().next(); pullReplicas = s.getReplicas(EnumSet.of(Replica.Type.PULL)); assertEquals(numPullReplicas, pullReplicas.size()); - waitForNumDocsInAllReplicas(numDocs, pullReplicas, "*:*", USER, PASS); + waitForNumDocsInAllReplicas(numDocs, pullReplicas, "*:*", SecurityJson.USER, SecurityJson.PASS); withBasicAuth(CollectionAdminRequest.deleteCollection(collectionName)) .process(cluster.getSolrClient()); diff --git a/solr/core/src/test/org/apache/solr/cloud/ZkControllerTest.java b/solr/core/src/test/org/apache/solr/cloud/ZkControllerTest.java index f1df3febab4..ff892a74f00 100644 --- a/solr/core/src/test/org/apache/solr/cloud/ZkControllerTest.java +++ b/solr/core/src/test/org/apache/solr/cloud/ZkControllerTest.java @@ -23,8 +23,12 @@ import java.nio.charset.StandardCharsets; import java.nio.file.Path; +import java.time.Duration; +import java.time.Instant; import java.util.Collections; import java.util.List; +import java.util.Map; +import java.util.Optional; import java.util.Properties; import java.util.concurrent.CountDownLatch; import java.util.concurrent.ExecutorService; @@ -34,7 +38,9 @@ import org.apache.solr.SolrTestCaseJ4; import org.apache.solr.common.MapWriter; import org.apache.solr.common.cloud.ClusterProperties; +import
org.apache.solr.common.cloud.ClusterState; import org.apache.solr.common.cloud.DocCollection; +import org.apache.solr.common.cloud.Replica; import org.apache.solr.common.cloud.SolrZkClient; import org.apache.solr.common.cloud.ZkNodeProps; import org.apache.solr.common.cloud.ZkStateReader; @@ -55,21 +61,15 @@ import org.apache.solr.util.LogLevel; import org.apache.zookeeper.CreateMode; import org.apache.zookeeper.data.Stat; -import org.junit.AfterClass; -import org.junit.BeforeClass; +import org.hamcrest.Matchers; import org.junit.Test; @SolrTestCaseJ4.SuppressSSL -public class ZkControllerTest extends SolrTestCaseJ4 { +public class ZkControllerTest extends SolrCloudTestCase { static final int TIMEOUT = 10000; - @BeforeClass - public static void beforeClass() {} - - @AfterClass - public static void afterClass() {} - + @Test public void testNodeNameUrlConversion() throws Exception { // nodeName from parts @@ -152,6 +152,7 @@ public void testNodeNameUrlConversion() throws Exception { } } + @Test public void testGetHostName() throws Exception { Path zkDir = createTempDir("zkData"); @@ -180,6 +181,7 @@ public void testGetHostName() throws Exception { } @LogLevel(value = "org.apache.solr.cloud=DEBUG;org.apache.solr.cloud.overseer=DEBUG") + @Test public void testPublishAndWaitForDownStates() throws Exception { /* @@ -197,9 +199,8 @@ cores are down then the method will return immediately but if it uses coreNodeNa String nodeName = "127.0.0.1:8983_solr"; - ZkTestServer server = new ZkTestServer(zkDir); try { - server.run(); + cluster = configureCluster(1).configure(); AtomicReference zkControllerRef = new AtomicReference<>(); CoreContainer cc = @@ -223,9 +224,16 @@ public List getCoreDescriptors() { ZkController zkController = null; try { - CloudConfig cloudConfig = new CloudConfig.CloudConfigBuilder("127.0.0.1", 8983).build(); + CloudConfig cloudConfig = + new CloudConfig.CloudConfigBuilder("127.0.0.1", 8983) + .setUseDistributedClusterStateUpdates( + 
Boolean.getBoolean("solr.distributedClusterStateUpdates")) + .setUseDistributedCollectionConfigSetExecution( + Boolean.getBoolean("solr.distributedCollectionConfigSetExecution")) + .build(); zkController = - new ZkController(cc, server.getZkAddress(), TIMEOUT, cloudConfig, () -> null); + new ZkController( + cc, cluster.getZkServer().getZkAddress(), TIMEOUT, cloudConfig, () -> null); zkControllerRef.set(zkController); zkController @@ -258,6 +266,7 @@ public List<CoreDescriptor> getCoreDescriptors() { zkController.getOverseerJobQueue().offer(Utils.toJSON(m)); } + // Add an active replica that shares the same core name, but on a non-existent host MapWriter propMap = ew -> ew.put(Overseer.QUEUE_OPERATION, ADDREPLICA.toLower()) @@ -279,6 +288,7 @@ public List<CoreDescriptor> getCoreDescriptors() { zkController.getOverseerJobQueue().offer(propMap); } + // Add a down replica that shares the same core name, also on a non-existent host propMap = ew -> ew.put(Overseer.QUEUE_OPERATION, ADDREPLICA.toLower()) @@ -299,20 +309,77 @@ public List<CoreDescriptor> getCoreDescriptors() { zkController.getOverseerJobQueue().offer(propMap); } - zkController.getZkStateReader().forciblyRefreshAllClusterStateSlow(); + // Add an active replica on the existing host. This replica will exist in the cluster + // state but not on disk. + // We are testing that this replica is also put to "DOWN" even though it + // doesn't exist locally.
+ propMap = + ew -> + ew.put(Overseer.QUEUE_OPERATION, ADDREPLICA.toLower()) + .put(COLLECTION_PROP, collectionName) + .put(SHARD_ID_PROP, "shard1") + .put(ZkStateReader.NODE_NAME_PROP, nodeName) + .put(ZkStateReader.CORE_NAME_PROP, collectionName + "-not-on-disk") + .put(ZkStateReader.STATE_PROP, "active"); + if (zkController.getDistributedClusterStateUpdater().isDistributedStateUpdate()) { + zkController + .getDistributedClusterStateUpdater() + .doSingleStateUpdate( + DistributedClusterStateUpdater.MutatingCommand.SliceAddReplica, + new ZkNodeProps(propMap), + zkController.getSolrCloudManager(), + zkController.getZkStateReader()); + } else { + zkController.getOverseerJobQueue().offer(propMap); + } - long now = System.nanoTime(); - long timeout = now + TimeUnit.NANOSECONDS.convert(5, TimeUnit.SECONDS); + // Wait for the overseer to process all the replica additions + if (!zkController.getDistributedClusterStateUpdater().isDistributedStateUpdate()) { + zkController + .getZkStateReader() + .waitForState( + collectionName, + 10, + TimeUnit.SECONDS, + ((liveNodes, collectionState) -> + Optional.ofNullable(collectionState) + .map(DocCollection::getReplicas) + .map(List::size) + .orElse(0) + == 3)); + } + + Instant now = Instant.now(); zkController.publishAndWaitForDownStates(5); - assertTrue( - "The ZkController.publishAndWaitForDownStates should have timed out but it didn't", - System.nanoTime() >= timeout); + assertThat( + "The ZkController.publishAndWaitForDownStates should not have timed out but it did", + Duration.between(now, Instant.now()), + Matchers.lessThanOrEqualTo(Duration.ofSeconds(5))); + + zkController.getZkStateReader().forciblyRefreshAllClusterStateSlow(); + ClusterState clusterState = zkController.getClusterState(); + + Map<String, List<Replica>> replicasOnNode = + clusterState.getReplicaNamesPerCollectionOnNode(nodeName); + assertNotNull("There should be replicas on the existing node", replicasOnNode); + List<Replica> replicas = replicasOnNode.get(collectionName); +
assertNotNull("There should be replicas for the collection on the existing node", replicas); + assertEquals( + "Wrong number of replicas for the collection on the existing node", 1, replicas.size()); + for (Replica replica : replicas) { + assertEquals( + "Replica " + + replica.getName() + + " is not DOWN, even though it is on the node that should be DOWN", + Replica.State.DOWN, + replica.getState()); + } } finally { if (zkController != null) zkController.close(); cc.shutdown(); } } finally { - server.shutdown(); + cluster.shutdown(); } } diff --git a/solr/core/src/test/org/apache/solr/cloud/api/collections/ShardSplitTest.java b/solr/core/src/test/org/apache/solr/cloud/api/collections/ShardSplitTest.java index 517a718416c..59bad33ba96 100644 --- a/solr/core/src/test/org/apache/solr/cloud/api/collections/ShardSplitTest.java +++ b/solr/core/src/test/org/apache/solr/cloud/api/collections/ShardSplitTest.java @@ -1277,9 +1277,7 @@ protected void splitShard( QueryRequest request = new QueryRequest(params); request.setPath("/admin/collections"); - String baseUrl = - ((HttpSolrClient) shardToJetty.get(SHARD1).get(0).client.getSolrClient()).getBaseURL(); - baseUrl = baseUrl.substring(0, baseUrl.length() - "collection1".length()); + String baseUrl = shardToJetty.get(SHARD1).get(0).jetty.getBaseUrl().toString(); try (SolrClient baseServer = new HttpSolrClient.Builder(baseUrl) diff --git a/solr/core/src/test/org/apache/solr/core/BlobRepositoryCloudTest.java b/solr/core/src/test/org/apache/solr/core/BlobRepositoryCloudTest.java deleted file mode 100644 index 8fe8c34f79b..00000000000 --- a/solr/core/src/test/org/apache/solr/core/BlobRepositoryCloudTest.java +++ /dev/null @@ -1,127 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. 
- * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.solr.core; - -import java.io.IOException; -import java.nio.ByteBuffer; -import java.nio.charset.StandardCharsets; -import java.nio.file.Path; -import org.apache.solr.client.solrj.SolrQuery; -import org.apache.solr.client.solrj.SolrServerException; -import org.apache.solr.client.solrj.impl.CloudSolrClient; -import org.apache.solr.client.solrj.request.CollectionAdminRequest; -import org.apache.solr.client.solrj.response.QueryResponse; -import org.apache.solr.cloud.SolrCloudTestCase; -import org.apache.solr.common.SolrDocumentList; -import org.apache.solr.common.SolrInputDocument; -import org.apache.solr.common.cloud.ZkStateReader; -import org.apache.solr.common.params.CollectionAdminParams; -import org.apache.solr.handler.TestBlobHandler; -import org.junit.BeforeClass; -import org.junit.Test; - -public class BlobRepositoryCloudTest extends SolrCloudTestCase { - - public static final Path TEST_PATH = getFile("solr/configsets").toPath(); - - @BeforeClass - public static void setupCluster() throws Exception { - configureCluster(1) // only sharing *within* a node - .addConfig("configname", TEST_PATH.resolve("resource-sharing")) - .configure(); - // Thread.sleep(2000); - CollectionAdminRequest.createCollection(CollectionAdminParams.SYSTEM_COLL, null, 1, 1) - .process(cluster.getSolrClient()); - // test component will fail if it can't find a blob with this data by this 
name - TestBlobHandler.postData( - cluster.getSolrClient(), - findLiveNodeURI(), - "testResource", - ByteBuffer.wrap("foo,bar\nbaz,bam".getBytes(StandardCharsets.UTF_8))); - // Thread.sleep(2000); - // if these don't load we probably failed to post the blob above - CollectionAdminRequest.createCollection("col1", "configname", 1, 1) - .process(cluster.getSolrClient()); - CollectionAdminRequest.createCollection("col2", "configname", 1, 1) - .process(cluster.getSolrClient()); - - SolrInputDocument document = new SolrInputDocument(); - document.addField("id", "1"); - document.addField("text", "col1"); - CloudSolrClient solrClient = cluster.getSolrClient(); - solrClient.add("col1", document); - solrClient.commit("col1"); - document = new SolrInputDocument(); - document.addField("id", "1"); - document.addField("text", "col2"); - solrClient.add("col2", document); - solrClient.commit("col2"); - Thread.sleep(2000); - } - - @Test - public void test() throws Exception { - // This test relies on the installation of ResourceSharingTestComponent which has 2 useful - // properties: - // 1. it will fail to initialize if it doesn't find a 2 line CSV like foo,bar\nbaz,bam thus - // validating that we are properly pulling data from the blob store - // 2. It replaces any q for a query request to /select with "text:" where is - // the name of the last collection to run a query. It does this by caching a shared resource of - // type ResourceSharingTestComponent.TestObject, and the following sequence is proof that either - // collection can tell if it was (or was not) the last collection to issue a query by consulting - // the shared object - assertLastQueryNotToCollection("col1"); - assertLastQueryNotToCollection("col2"); - assertLastQueryNotToCollection("col1"); - assertLastQueryToCollection("col1"); - assertLastQueryNotToCollection("col2"); - assertLastQueryToCollection("col2"); - } - - // TODO: move this up to parent class? 
- private static String findLiveNodeURI() { - ZkStateReader zkStateReader = cluster.getZkStateReader(); - return zkStateReader.getBaseUrlForNodeName( - zkStateReader - .getClusterState() - .getCollection(".system") - .getSlices() - .iterator() - .next() - .getLeader() - .getNodeName()); - } - - private void assertLastQueryToCollection(String collection) - throws SolrServerException, IOException { - assertEquals(1, getSolrDocuments(collection).size()); - } - - private void assertLastQueryNotToCollection(String collection) - throws SolrServerException, IOException { - assertEquals(0, getSolrDocuments(collection).size()); - } - - private SolrDocumentList getSolrDocuments(String collection) - throws SolrServerException, IOException { - SolrQuery query = new SolrQuery("*:*"); - CloudSolrClient client = cluster.getSolrClient(); - QueryResponse resp1 = client.query(collection, query); - return resp1.getResults(); - } -} diff --git a/solr/core/src/test/org/apache/solr/core/BlobRepositoryMockingTest.java b/solr/core/src/test/org/apache/solr/core/BlobRepositoryMockingTest.java deleted file mode 100644 index cf37174bbc3..00000000000 --- a/solr/core/src/test/org/apache/solr/core/BlobRepositoryMockingTest.java +++ /dev/null @@ -1,191 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -package org.apache.solr.core; - -import static org.mockito.Mockito.mock; -import static org.mockito.Mockito.reset; -import static org.mockito.Mockito.verify; -import static org.mockito.Mockito.when; - -import java.io.IOException; -import java.io.InputStream; -import java.io.InputStreamReader; -import java.io.StringWriter; -import java.nio.ByteBuffer; -import java.nio.charset.Charset; -import java.nio.charset.StandardCharsets; -import java.util.Objects; -import java.util.concurrent.ConcurrentHashMap; -import org.apache.solr.SolrTestCaseJ4; -import org.apache.solr.common.SolrException; -import org.junit.Before; -import org.junit.BeforeClass; -import org.junit.Test; - -public class BlobRepositoryMockingTest extends SolrTestCaseJ4 { - - private static final Charset UTF8 = StandardCharsets.UTF_8; - private static final String[][] PARSED = - new String[][] {{"foo", "bar", "baz"}, {"bang", "boom", "bash"}}; - private static final String BLOBSTR = "foo,bar,baz\nbang,boom,bash"; - private CoreContainer mockContainer = mock(CoreContainer.class); - - @SuppressWarnings({"unchecked", "rawtypes"}) - private ConcurrentHashMap blobStorage; - - BlobRepository repository; - ByteBuffer blobData = ByteBuffer.wrap(BLOBSTR.getBytes(UTF8)); - boolean blobFetched = false; - String blobKey = ""; - String url = null; - ByteBuffer filecontent = null; - - @BeforeClass - public static void beforeClass() { - SolrTestCaseJ4.assumeWorkingMockito(); - } - - @Override - @Before - public void setUp() throws Exception { - super.setUp(); - blobFetched = false; - blobKey = ""; - reset(mockContainer); - blobStorage = new ConcurrentHashMap<>(); - repository = - new BlobRepository(mockContainer) { - @Override - ByteBuffer fetchBlob(String key) { - blobKey = key; - blobFetched = true; - return blobData; - } - - @Override - ByteBuffer fetchFromUrl(String key, String url) { - if 
(!Objects.equals(url, BlobRepositoryMockingTest.this.url)) return null; - blobKey = key; - blobFetched = true; - return filecontent; - } - - @Override - @SuppressWarnings({"rawtypes"}) - ConcurrentHashMap createMap() { - return blobStorage; - } - }; - } - - @Test(expected = SolrException.class) - public void testCloudOnly() { - when(mockContainer.isZooKeeperAware()).thenReturn(false); - try { - repository.getBlobIncRef("foo!"); - } catch (SolrException e) { - verify(mockContainer).isZooKeeperAware(); - throw e; - } - } - - @Test - public void testGetBlobIncrRefString() { - when(mockContainer.isZooKeeperAware()).thenReturn(true); - BlobRepository.BlobContentRef ref = repository.getBlobIncRef("foo!"); - assertEquals("foo!", blobKey); - assertTrue(blobFetched); - assertNotNull(ref.blob); - assertEquals(blobData, ref.blob.get()); - verify(mockContainer).isZooKeeperAware(); - assertNotNull(blobStorage.get("foo!")); - } - - @Test - public void testGetBlobIncrRefByUrl() throws Exception { - when(mockContainer.isZooKeeperAware()).thenReturn(true); - filecontent = TestSolrConfigHandler.getFileContent("runtimecode/runtimelibs_v2.jar.bin"); - url = "http://localhost:8080/myjar/location.jar"; - BlobRepository.BlobContentRef ref = - repository.getBlobIncRef( - "filefoo", - null, - url, - "bc5ce45ad281b6a08fb7e529b1eb475040076834816570902acb6ebdd809410e31006efdeaa7f78a6c35574f3504963f5f7e4d92247d0eb4db3fc9abdda5d417"); - assertEquals("filefoo", blobKey); - assertTrue(blobFetched); - assertNotNull(ref.blob); - assertEquals(filecontent, ref.blob.get()); - verify(mockContainer).isZooKeeperAware(); - try { - repository.getBlobIncRef("filefoo", null, url, "WRONG-SHA512-KEY"); - fail("expected exception"); - } catch (Exception e) { - assertTrue(e.getMessage().contains(" expected sha512 hash : WRONG-SHA512-KEY , actual :")); - } - - url = null; - filecontent = null; - } - - @Test - public void testCachedAlready() { - when(mockContainer.isZooKeeperAware()).thenReturn(true); - 
blobStorage.put("foo!", new BlobRepository.BlobContent("foo!", blobData)); - BlobRepository.BlobContentRef ref = repository.getBlobIncRef("foo!"); - assertEquals("", blobKey); - assertFalse(blobFetched); - assertNotNull(ref.blob); - assertEquals(blobData, ref.blob.get()); - verify(mockContainer).isZooKeeperAware(); - assertNotNull("Key was not mapped to a BlobContent instance.", blobStorage.get("foo!")); - } - - @Test - public void testGetBlobIncrRefStringDecoder() { - when(mockContainer.isZooKeeperAware()).thenReturn(true); - BlobRepository.BlobContentRef ref = - repository.getBlobIncRef( - "foo!", - new BlobRepository.Decoder<>() { - @Override - public String[][] decode(InputStream inputStream) { - StringWriter writer = new StringWriter(); - try { - new InputStreamReader(inputStream, UTF8).transferTo(writer); - } catch (IOException e) { - throw new RuntimeException(e); - } - - assertEquals(BLOBSTR, writer.toString()); - return PARSED; - } - - @Override - public String getName() { - return "mocked"; - } - }); - assertEquals("foo!", blobKey); - assertTrue(blobFetched); - assertNotNull(ref.blob); - assertEquals(PARSED, ref.blob.get()); - verify(mockContainer).isZooKeeperAware(); - assertNotNull(blobStorage.get("foo!mocked")); - } -} diff --git a/solr/core/src/test/org/apache/solr/core/CachingDirectoryFactoryTest.java b/solr/core/src/test/org/apache/solr/core/CachingDirectoryFactoryTest.java index 404a7987930..3dd193703cf 100644 --- a/solr/core/src/test/org/apache/solr/core/CachingDirectoryFactoryTest.java +++ b/solr/core/src/test/org/apache/solr/core/CachingDirectoryFactoryTest.java @@ -94,7 +94,9 @@ public void reorderingTest() throws Exception { df.remove(pathAString, addIfTrue(deleteAfter, pathAString, removeAfter.getAsBoolean())); df.doneWithDirectory(a); df.release(a); - assertTrue(pathA.toFile().exists()); // we know there are subdirs that should prevent removal + assertTrue( + "The path " + pathA + " should exist because it has subdirs that prevent removal", 
+ pathA.toFile().exists()); // we know there are subdirs that should prevent removal Collections.shuffle(Arrays.asList(subdirs), r); for (Map.Entry e : subdirs) { boolean after = removeAfter.getAsBoolean(); @@ -105,18 +107,26 @@ public void reorderingTest() throws Exception { df.release(d); boolean exists = Path.of(pathString).toFile().exists(); if (after) { - assertTrue(exists); + assertTrue( + "Path " + pathString + " should only be removed later, but it no longer exists", exists); } else { - assertFalse(exists); + assertFalse( + "Path " + pathString + " should have been removed immediately, but it still exists", exists); } } if (alwaysBefore) { - assertTrue(deleteAfter.isEmpty()); + assertTrue( + "All removals should have happened immediately, but some paths were queued for deferred deletion", + deleteAfter.isEmpty()); } if (deleteAfter.isEmpty()) { - assertTrue(PathUtils.isEmpty(tmpDir)); + assertTrue( + "There are no subdirs to delete afterwards, therefore the directory should have been emptied", + PathUtils.isEmpty(tmpDir)); } else { - assertTrue(pathA.toFile().exists()); // parent must still be present + assertTrue( + "There are subdirs to wait on, so the parent directory should still exist", + pathA.toFile().exists()); // parent must still be present for (Map.Entry e : subdirs) { String pathString = e.getKey(); boolean exists = new File(pathString).exists(); @@ -128,7 +138,7 @@ public void reorderingTest() throws Exception { } } } - assertTrue(PathUtils.isEmpty(tmpDir)); + assertTrue("Dir " + tmpDir + " should be empty at the end", PathUtils.isEmpty(tmpDir)); } private static boolean addIfTrue(Set deleteAfter, String path, boolean after) { diff --git a/solr/core/src/test/org/apache/solr/filestore/TestDistribFileStore.java b/solr/core/src/test/org/apache/solr/filestore/TestDistribFileStore.java index a6fa83b579c..62668789d22 100644 --- a/solr/core/src/test/org/apache/solr/filestore/TestDistribFileStore.java +++
b/solr/core/src/test/org/apache/solr/filestore/TestDistribFileStore.java @@ -169,7 +169,7 @@ public void testFileStoreManagement() throws Exception { return true; }); for (JettySolrRunner jettySolrRunner : cluster.getJettySolrRunners()) { - String baseUrl = jettySolrRunner.getBaseUrl().toString().replace("/solr", "/api"); + String baseUrl = jettySolrRunner.getBaseURLV2().toString(); String url = baseUrl + "/node/files/package/mypkg/v1.0?wt=javabin"; assertResponseValues(10, new Fetcher(url, jettySolrRunner), expected); } @@ -196,7 +196,7 @@ public static void checkAllNodesForFile( boolean verifyContent) throws Exception { for (JettySolrRunner jettySolrRunner : cluster.getJettySolrRunners()) { - String baseUrl = jettySolrRunner.getBaseUrl().toString().replace("/solr", "/api"); + String baseUrl = jettySolrRunner.getBaseURLV2().toString(); String url = baseUrl + "/node/files" + path + "?wt=javabin&meta=true"; assertResponseValues(10, new Fetcher(url, jettySolrRunner), expected); diff --git a/solr/core/src/test/org/apache/solr/handler/TestContainerPlugin.java b/solr/core/src/test/org/apache/solr/handler/TestContainerPlugin.java index eabadb80ac8..bd19c67f935 100644 --- a/solr/core/src/test/org/apache/solr/handler/TestContainerPlugin.java +++ b/solr/core/src/test/org/apache/solr/handler/TestContainerPlugin.java @@ -572,7 +572,7 @@ private V2Request postPlugin(Object payload) { public void waitForAllNodesToSync(String path, Map expected) throws Exception { for (JettySolrRunner jettySolrRunner : cluster.getJettySolrRunners()) { - String baseUrl = jettySolrRunner.getBaseUrl().toString().replace("/solr", "/api"); + String baseUrl = jettySolrRunner.getBaseURLV2().toString(); String url = baseUrl + path + "?wt=javabin"; TestDistribFileStore.assertResponseValues(1, new Fetcher(url, jettySolrRunner), expected); } diff --git a/solr/core/src/test/org/apache/solr/handler/admin/ZookeeperReadAPITest.java b/solr/core/src/test/org/apache/solr/handler/admin/ZookeeperReadAPITest.java 
index e77290dc230..39d494e177c 100644 --- a/solr/core/src/test/org/apache/solr/handler/admin/ZookeeperReadAPITest.java +++ b/solr/core/src/test/org/apache/solr/handler/admin/ZookeeperReadAPITest.java @@ -52,8 +52,10 @@ public void setUp() throws Exception { super.setUp(); baseUrl = cluster.getJettySolrRunner(0).getBaseUrl(); - basezk = baseUrl.toString().replace("/solr", "/api") + "/cluster/zookeeper/data"; - basezkls = baseUrl.toString().replace("/solr", "/api") + "/cluster/zookeeper/children"; + + String baseUrlV2 = cluster.getJettySolrRunner(0).getBaseURLV2().toString(); + basezk = baseUrlV2 + "/cluster/zookeeper/data"; + basezkls = baseUrlV2 + "/cluster/zookeeper/children"; } @After diff --git a/solr/core/src/test/org/apache/solr/handler/component/ResourceSharingTestComponent.java b/solr/core/src/test/org/apache/solr/handler/component/ResourceSharingTestComponent.java deleted file mode 100644 index e653b1f6de3..00000000000 --- a/solr/core/src/test/org/apache/solr/handler/component/ResourceSharingTestComponent.java +++ /dev/null @@ -1,142 +0,0 @@ -/* - * Licensed to the Apache Software Foundation (ASF) under one or more - * contributor license agreements. See the NOTICE file distributed with - * this work for additional information regarding copyright ownership. - * The ASF licenses this file to You under the Apache License, Version 2.0 - * (the "License"); you may not use this file except in compliance with - * the License. You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -package org.apache.solr.handler.component; - -import static org.junit.Assert.assertEquals; - -import java.io.BufferedReader; -import java.io.InputStream; -import java.io.InputStreamReader; -import java.lang.invoke.MethodHandles; -import java.nio.charset.StandardCharsets; -import java.util.HashMap; -import java.util.Map; -import java.util.stream.Stream; -import org.apache.solr.common.params.ModifiableSolrParams; -import org.apache.solr.common.params.SolrParams; -import org.apache.solr.core.BlobRepository; -import org.apache.solr.core.SolrCore; -import org.apache.solr.util.plugin.SolrCoreAware; -import org.slf4j.Logger; -import org.slf4j.LoggerFactory; - -public class ResourceSharingTestComponent extends SearchComponent implements SolrCoreAware { - private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); - - private SolrCore core; - private volatile BlobRepository.BlobContent blob; - - @SuppressWarnings("SynchronizeOnNonFinalField") - @Override - public void prepare(ResponseBuilder rb) { - SolrParams params = rb.req.getParams(); - ModifiableSolrParams mParams = new ModifiableSolrParams(params); - String q = "text:" + getTestObj().getLastCollection(); - mParams.set("q", q); // search for the last collection name. - // This should cause the param to show up in the response... 
- rb.req.setParams(mParams); - getTestObj().setLastCollection(core.getCoreDescriptor().getCollectionName()); - } - - @Override - public void process(ResponseBuilder rb) {} - - @Override - public String getDescription() { - return "ResourceSharingTestComponent"; - } - - TestObject getTestObj() { - return this.blob.get(); - } - - @SuppressWarnings("unchecked") - @Override - public void inform(SolrCore core) { - log.info("Informing test component..."); - this.core = core; - this.blob = core.loadDecodeAndCacheBlob(getKey(), new DumbCsvDecoder()).blob; - log.info("Test component informed!"); - } - - private String getKey() { - return getResourceName() + "/" + getResourceVersion(); - } - - public String getResourceName() { - return "testResource"; - } - - public String getResourceVersion() { - return "1"; - } - - class DumbCsvDecoder implements BlobRepository.Decoder { - private final Map dict = new HashMap<>(); - - public DumbCsvDecoder() {} - - void processSimpleCsvRow(String string) { - String[] row = string.split(","); // dumbest csv parser ever... :) - getDict().put(row[0], row[1]); - } - - public Map getDict() { - return dict; - } - - @Override - public TestObject decode(InputStream inputStream) { - // loading a tiny csv like: - // - // foo,bar - // baz,bam - - try (Stream lines = - new BufferedReader(new InputStreamReader(inputStream, StandardCharsets.UTF_8)).lines()) { - lines.forEach(this::processSimpleCsvRow); - } catch (Exception e) { - log.error("failed to read dictionary {}", getResourceName()); - throw new RuntimeException("Cannot load dictionary ", e); - } - - assertEquals("bar", dict.get("foo")); - assertEquals("bam", dict.get("baz")); - if (log.isInfoEnabled()) { - log.info("Loaded {} using {}", getDict().size(), this.getClass().getClassLoader()); - } - - // if we get here we have seen the data from the blob and all we need is to test that two - // collections are able to see the same object... 
- return new TestObject(); - } - } - - public static class TestObject { - public static final String NEVER_UPDATED = "never updated"; - private volatile String lastCollection = NEVER_UPDATED; - - public String getLastCollection() { - return this.lastCollection; - } - - public void setLastCollection(String lastCollection) { - this.lastCollection = lastCollection; - } - } -} diff --git a/solr/core/src/test/org/apache/solr/handler/designer/TestSchemaDesignerAPI.java b/solr/core/src/test/org/apache/solr/handler/designer/TestSchemaDesignerAPI.java index 57d5a6b47a8..90a7685d83e 100644 --- a/solr/core/src/test/org/apache/solr/handler/designer/TestSchemaDesignerAPI.java +++ b/solr/core/src/test/org/apache/solr/handler/designer/TestSchemaDesignerAPI.java @@ -70,7 +70,7 @@ public static void createCluster() throws Exception { configureCluster(1) .addConfig(DEFAULT_CONFIGSET_NAME, new File(ExternalPaths.DEFAULT_CONFIGSET).toPath()) .configure(); - // SchemaDesignerAPI depends on the blob store + // SchemaDesignerAPI depends on the blob store ".system" collection existing. 
CollectionAdminRequest.createCollection(BLOB_STORE_ID, 1, 1).process(cluster.getSolrClient()); cluster.waitForActiveCollection(BLOB_STORE_ID, 1, 1); } diff --git a/solr/core/src/test/org/apache/solr/pkg/TestPackages.java b/solr/core/src/test/org/apache/solr/pkg/TestPackages.java index d70aa372737..aae36019a65 100644 --- a/solr/core/src/test/org/apache/solr/pkg/TestPackages.java +++ b/solr/core/src/test/org/apache/solr/pkg/TestPackages.java @@ -691,8 +691,7 @@ public void testAPI() throws Exception { // So far we have been verifying the details with ZK directly // use the package read API to verify with each node that it has the correct data for (JettySolrRunner jetty : cluster.getJettySolrRunners()) { - String path = - jetty.getBaseUrl().toString().replace("/solr", "/api") + "/cluster/package?wt=javabin"; + String path = jetty.getBaseURLV2().toString() + "/cluster/package?wt=javabin"; TestDistribFileStore.assertResponseValues( 10, new Callable() { diff --git a/solr/core/src/test/org/apache/solr/search/DistributedReRankExplainTest.java b/solr/core/src/test/org/apache/solr/search/DistributedReRankExplainTest.java index a366ba2e67a..a0c99cb7f4f 100644 --- a/solr/core/src/test/org/apache/solr/search/DistributedReRankExplainTest.java +++ b/solr/core/src/test/org/apache/solr/search/DistributedReRankExplainTest.java @@ -17,9 +17,11 @@ package org.apache.solr.search; +import static org.hamcrest.Matchers.containsString; +import static org.hamcrest.Matchers.not; + import java.lang.invoke.MethodHandles; import java.util.Map; -import java.util.Random; import org.apache.solr.SolrTestCaseJ4; import org.apache.solr.client.solrj.impl.CloudSolrClient; import org.apache.solr.client.solrj.request.CollectionAdminRequest; @@ -29,7 +31,8 @@ import org.apache.solr.cloud.SolrCloudTestCase; import org.apache.solr.common.SolrInputDocument; import org.apache.solr.common.params.CommonParams; -import org.apache.solr.common.params.ModifiableSolrParams; +import 
org.apache.solr.common.params.ShardParams; +import org.apache.solr.common.params.SolrParams; import org.junit.BeforeClass; import org.junit.Test; import org.slf4j.Logger; @@ -60,10 +63,7 @@ public static void setupCluster() throws Exception { .process(cluster.getSolrClient()); cluster.waitForActiveCollection(collection, 2, 2); - } - @Test - public void testReRankExplain() throws Exception { CloudSolrClient client = cluster.getSolrClient(); UpdateRequest updateRequest = new UpdateRequest(); for (int i = 0; i < 100; i++) { @@ -74,35 +74,90 @@ public void testReRankExplain() throws Exception { } updateRequest.process(client, COLLECTIONORALIAS); client.commit(COLLECTIONORALIAS); + } + + @Test + public void testDebugTrue() throws Exception { + doTestReRankExplain(params(CommonParams.DEBUG_QUERY, "true")); + doTestReRankExplain(params(CommonParams.DEBUG, "true")); + } + + @Test + public void testDebugAll() throws Exception { + doTestReRankExplain(params(CommonParams.DEBUG, "all")); + } + + @Test + public void testDebugResults() throws Exception { + doTestReRankExplain(params(CommonParams.DEBUG, CommonParams.RESULTS)); + } + + private void doTestReRankExplain(final SolrParams debugParams) throws Exception { + final String reRankMainScale = + "{!rerank reRankDocs=10 reRankMainScale=0-10 reRankQuery='test_s:hello'}"; + final String reRankScale = + "{!rerank reRankDocs=10 reRankScale=0-10 reRankQuery='test_s:hello'}"; + + { // multi-pass reRankMainScale + final QueryResponse queryResponse = + doQueryAndCommonChecks( + SolrParams.wrapDefaults(params(CommonParams.RQ, reRankMainScale), debugParams)); + final Map debug = queryResponse.getDebugMap(); + assertNotNull(debug); + final String explain = debug.get("explain").toString(); + assertThat(explain, containsString("ReRank Scaling effects unkown")); + } + + { // single-pass reRankMainScale + final QueryResponse queryResponse = + doQueryAndCommonChecks( + SolrParams.wrapDefaults( + params(CommonParams.RQ, reRankMainScale, 
ShardParams.DISTRIB_SINGLE_PASS, "true"), + debugParams)); + final Map debug = queryResponse.getDebugMap(); + assertNotNull(debug); + final String explain = debug.get("explain").toString(); + assertThat( + explain, + containsString("5.0101576 = combined scaled first and unscaled second pass score ")); + assertThat(explain, not(containsString("ReRank Scaling effects unkown"))); + } + + { // multi-pass reRankMainScale + final QueryResponse queryResponse = + doQueryAndCommonChecks( + SolrParams.wrapDefaults(params(CommonParams.RQ, reRankScale), debugParams)); + final Map debug = queryResponse.getDebugMap(); + assertNotNull(debug); + final String explain = debug.get("explain").toString(); + assertThat(explain, containsString("ReRank Scaling effects unkown")); + } + + { // single-pass reRankMainScale + final QueryResponse queryResponse = + doQueryAndCommonChecks( + SolrParams.wrapDefaults( + params(CommonParams.RQ, reRankScale, ShardParams.DISTRIB_SINGLE_PASS, "true"), + debugParams)); + final Map debug = queryResponse.getDebugMap(); + assertNotNull(debug); + final String explain = debug.get("explain").toString(); + assertThat( + explain, + containsString("10.005078 = combined unscaled first and scaled second pass score ")); + assertThat(explain, not(containsString("ReRank Scaling effects unkown"))); + } + } + + private QueryResponse doQueryAndCommonChecks(final SolrParams params) throws Exception { + final CloudSolrClient client = cluster.getSolrClient(); + final QueryRequest queryRequest = + new QueryRequest( + SolrParams.wrapDefaults( + params, params(CommonParams.Q, "test_s:hello", "fl", "id,test_s,score"))); - String[] debugParams = {CommonParams.DEBUG, CommonParams.DEBUG_QUERY}; - Random random = random(); - ModifiableSolrParams solrParams = new ModifiableSolrParams(); - String reRank = "{!rerank reRankDocs=10 reRankMainScale=0-10 reRankQuery='test_s:hello'}"; - solrParams - .add("q", "test_s:hello") - .add(debugParams[random.nextInt(2)], "true") - 
.add(CommonParams.RQ, reRank); - QueryRequest queryRequest = new QueryRequest(solrParams); - QueryResponse queryResponse = queryRequest.process(client, COLLECTIONORALIAS); - Map debug = queryResponse.getDebugMap(); - assertNotNull(debug); - String explain = debug.get("explain").toString(); - assertTrue( - explain.contains("5.0101576 = combined scaled first and unscaled second pass score ")); - - solrParams = new ModifiableSolrParams(); - reRank = "{!rerank reRankDocs=10 reRankScale=0-10 reRankQuery='test_s:hello'}"; - solrParams - .add("q", "test_s:hello") - .add(debugParams[random.nextInt(2)], "true") - .add(CommonParams.RQ, reRank); - queryRequest = new QueryRequest(solrParams); - queryResponse = queryRequest.process(client, COLLECTIONORALIAS); - debug = queryResponse.getDebugMap(); - assertNotNull(debug); - explain = debug.get("explain").toString(); - assertTrue( - explain.contains("10.005078 = combined unscaled first and scaled second pass score ")); + final QueryResponse queryResponse = queryRequest.process(client, COLLECTIONORALIAS); + assertNotNull(queryResponse.getResults().get(0).getFieldValue("test_s")); + return queryResponse; } } diff --git a/solr/core/src/test/org/apache/solr/search/TestReRankQParserPlugin.java b/solr/core/src/test/org/apache/solr/search/TestReRankQParserPlugin.java index 5d769c4e57f..0c53d2fcc70 100644 --- a/solr/core/src/test/org/apache/solr/search/TestReRankQParserPlugin.java +++ b/solr/core/src/test/org/apache/solr/search/TestReRankQParserPlugin.java @@ -1377,7 +1377,7 @@ public void testReRankScaleQueries() throws Exception { + ReRankQParserPlugin.NAME + " " + ReRankQParserPlugin.RERANK_MAIN_SCALE - + "=10-20 " + + "=10-19 " + ReRankQParserPlugin.RERANK_SCALE + "=10-20 " + ReRankQParserPlugin.RERANK_WEIGHT @@ -1400,7 +1400,7 @@ public void testReRankScaleQueries() throws Exception { "//result/doc[1]/str[@name='id'][.='4']", "//result/doc[1]/float[@name='score'][.='30.0']", "//result/doc[2]/str[@name='id'][.='5']", - 
"//result/doc[2]/float[@name='score'][.='30.0']"); + "//result/doc[2]/float[@name='score'][.='29.0']"); // Test reRank more than found params = new ModifiableSolrParams(); @@ -1410,7 +1410,7 @@ public void testReRankScaleQueries() throws Exception { + ReRankQParserPlugin.NAME + " " + ReRankQParserPlugin.RERANK_MAIN_SCALE - + "=10-20 " + + "=10-19 " + ReRankQParserPlugin.RERANK_SCALE + "=10-20 " + ReRankQParserPlugin.RERANK_WEIGHT @@ -1434,15 +1434,15 @@ public void testReRankScaleQueries() throws Exception { "//result/doc[1]/str[@name='id'][.='4']", "//result/doc[1]/float[@name='score'][.='30.0']", "//result/doc[2]/str[@name='id'][.='5']", - "//result/doc[2]/float[@name='score'][.='30.0']"); + "//result/doc[2]/float[@name='score'][.='29.0']"); String explainResponse = JQ(req(params)); assertTrue(explainResponse.contains("30.0 = combined scaled first and second pass score")); - assertTrue(explainResponse.contains("10.0 = first pass score scaled between: 10-20")); + assertTrue(explainResponse.contains("10.0 = first pass score scaled between: 10-19")); assertTrue(explainResponse.contains("20.0 = second pass score scaled between: 10-20")); - assertTrue(explainResponse.contains("20.0 = first pass score scaled between: 10-20")); + assertTrue(explainResponse.contains("19.0 = first pass score scaled between: 10-19")); assertTrue(explainResponse.contains("10.0 = second pass score scaled between: 10-20")); diff --git a/solr/core/src/test/org/apache/solr/security/AuditLoggerIntegrationTest.java b/solr/core/src/test/org/apache/solr/security/AuditLoggerIntegrationTest.java index 2232ddb508a..47886c25380 100644 --- a/solr/core/src/test/org/apache/solr/security/AuditLoggerIntegrationTest.java +++ b/solr/core/src/test/org/apache/solr/security/AuditLoggerIntegrationTest.java @@ -253,12 +253,11 @@ public void searchWithException() throws Exception { @Test public void illegalAdminPathError() throws Exception { setupCluster(false, null, false); - String baseUrl = 
testHarness.get().cluster.getJettySolrRunner(0).getBaseUrl().toString(); + String baseUrl = testHarness.get().cluster.getJettySolrRunner(0).getBaseURLV2().toString(); expectThrows( FileNotFoundException.class, () -> { - try (InputStream is = - new URL(baseUrl.replace("/solr", "") + "/api/node/foo").openStream()) { + try (InputStream is = new URL(baseUrl + "/node/foo").openStream()) { new String(is.readAllBytes(), StandardCharsets.UTF_8); } }); diff --git a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateJsonTest.java b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateJsonTest.java new file mode 100644 index 00000000000..699e01c7c75 --- /dev/null +++ b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateJsonTest.java @@ -0,0 +1,184 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.solr.update.processor; + +import java.util.HashMap; +import java.util.Map; +import org.apache.solr.SolrTestCaseJ4; +import org.apache.solr.common.SolrInputDocument; +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Test; + +// Tests atomic updates using JSON loader, since the existing +// tests all use XML format, and there have been some atomic update +// issues that were specific to the JSON format. +public class AtomicUpdateJsonTest extends SolrTestCaseJ4 { + + @BeforeClass + public static void beforeTests() throws Exception { + System.setProperty("enable.update.log", "true"); + initCore("solrconfig.xml", "atomic-update-json-test.xml"); + } + + @Before + public void before() { + assertU(delQ("*:*")); + assertU(commit()); + } + + @Test + public void testSchemaIsNotUsableForChildDocs() throws Exception { + // the schema we loaded shouldn't be usable for child docs, + // since we're testing JSON loader functionality that only + // works in that case and is ambiguous if nested docs are supported. 
+ assertFalse(h.getCore().getLatestSchema().isUsableForChildDocs()); + } + + @Test + public void testAddOne() throws Exception { + SolrInputDocument doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("name", new String[] {"aaa"}); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "name:bbb"), "//result[@numFound = '0']"); + + doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("name", Map.of("add", "bbb")); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "name:bbb"), "//result[@numFound = '1']"); + } + + @Test + public void testRemoveOne() throws Exception { + SolrInputDocument doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("name", new String[] {"aaa", "bbb"}); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "name:bbb"), "//result[@numFound = '1']"); + + doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("name", Map.of("remove", "bbb")); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "name:bbb"), "//result[@numFound = '0']"); + assertQ(req("q", "name:aaa"), "//result[@numFound = '1']"); + } + + @Test + public void testRemoveMultiple() throws Exception { + SolrInputDocument doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("name", new String[] {"aaa", "bbb", "ccc"}); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "name:bbb"), "//result[@numFound = '1']"); + + doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField( + "name", Map.of("add", new String[] {"ddd", "eee"}, "remove", new String[] {"aaa", "ccc"})); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "name:aaa"), "//result[@numFound = '0']"); + assertQ(req("q", "name:ccc"), "//result[@numFound = '0']"); + assertQ(req("q", "name:bbb"), "//result[@numFound = '1']"); + assertQ(req("q", "name:ddd"), "//result[@numFound = '1']"); + assertQ(req("q", 
"name:eee"), "//result[@numFound = '1']"); + } + + @Test + public void testAddAndRemove() throws Exception { + SolrInputDocument doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("name", new String[] {"aaa", "bbb", "ccc"}); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "name:aaa"), "//result[@numFound = '1']"); + assertQ(req("q", "name:bbb"), "//result[@numFound = '1']"); + assertQ(req("q", "name:ccc"), "//result[@numFound = '1']"); + + doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("name", Map.of("add", "ddd", "remove", "bbb")); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "name:ddd"), "//result[@numFound = '1']"); + assertQ(req("q", "name:bbb"), "//result[@numFound = '0']"); + assertQ(req("q", "name:ccc"), "//result[@numFound = '1']"); + assertQ(req("q", "name:aaa"), "//result[@numFound = '1']"); + } + + @Test + public void testAtomicUpdateModifierNameSingleValued() throws Exception { + // Testing atomic update with a single-valued field named 'set' + SolrInputDocument doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("set", "setval"); + doc.setField("name", new String[] {"aaa"}); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "set:setval"), "//result[@numFound = '1']"); + + doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("set", Map.of("set", "modval")); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "set:modval"), "//result[@numFound = '1']"); + assertQ(req("q", "set:setval"), "//result[@numFound = '0']"); + + doc = new SolrInputDocument(); + doc.setField("id", "1"); + Map removeSetField = new HashMap<>(); + removeSetField.put("set", null); + doc.setField("set", removeSetField); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "set:modval"), "//result[@numFound = '0']"); + assertQ(req("q", "name:aaa"), "//result[@numFound = '1']"); + } + + @Test + 
public void testAtomicUpdateModifierNameMultiValued() throws Exception { + // Testing atomic update with a multi-valued field named 'add' + SolrInputDocument doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField("name", new String[] {"aaa"}); + doc.setField("add", new String[] {"aaa", "bbb", "ccc"}); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "add:bbb"), "//result[@numFound = '1']"); + + doc = new SolrInputDocument(); + doc.setField("id", "1"); + doc.setField( + "add", Map.of("add", new String[] {"ddd", "eee"}, "remove", new String[] {"bbb", "ccc"})); + updateJ(jsonAdd(doc), null); + assertU(commit()); + assertQ(req("q", "add:ddd"), "//result[@numFound = '1']"); + assertQ(req("q", "add:eee"), "//result[@numFound = '1']"); + assertQ(req("q", "add:aaa"), "//result[@numFound = '1']"); + assertQ(req("q", "add:bbb"), "//result[@numFound = '0']"); + assertQ(req("q", "add:ccc"), "//result[@numFound = '0']"); + } +} diff --git a/solr/licenses/commons-codec-1.16.1.jar.sha1 b/solr/licenses/commons-codec-1.16.1.jar.sha1 deleted file mode 100644 index 543eb8aa758..00000000000 --- a/solr/licenses/commons-codec-1.16.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -47bd4d333fba53406f6c6c51884ddbca435c8862 diff --git a/solr/licenses/commons-codec-1.17.0.jar.sha1 b/solr/licenses/commons-codec-1.17.0.jar.sha1 new file mode 100644 index 00000000000..39fd2e0343d --- /dev/null +++ b/solr/licenses/commons-codec-1.17.0.jar.sha1 @@ -0,0 +1 @@ +0dbe8eef6e14460e73da07f7b11bf994d6626355 diff --git a/solr/modules/hdfs/src/test/org/apache/solr/hdfs/cloud/StressHdfsTest.java b/solr/modules/hdfs/src/test/org/apache/solr/hdfs/cloud/StressHdfsTest.java index 72acd6f5343..7bb98b8d97b 100644 --- a/solr/modules/hdfs/src/test/org/apache/solr/hdfs/cloud/StressHdfsTest.java +++ b/solr/modules/hdfs/src/test/org/apache/solr/hdfs/cloud/StressHdfsTest.java @@ -251,4 +251,8 @@ private void createAndDeleteCollection() throws Exception { } } } + + protected String 
getBaseUrl(SolrClient client) { + return ((HttpSolrClient) client).getBaseURL(); + } } diff --git a/solr/modules/opentelemetry/src/test/org/apache/solr/opentelemetry/BasicAuthIntegrationTracingTest.java b/solr/modules/opentelemetry/src/test/org/apache/solr/opentelemetry/BasicAuthIntegrationTracingTest.java index cdbf9f27141..797f46bda54 100644 --- a/solr/modules/opentelemetry/src/test/org/apache/solr/opentelemetry/BasicAuthIntegrationTracingTest.java +++ b/solr/modules/opentelemetry/src/test/org/apache/solr/opentelemetry/BasicAuthIntegrationTracingTest.java @@ -16,10 +16,7 @@ */ package org.apache.solr.opentelemetry; -import static java.util.Collections.singletonList; -import static java.util.Collections.singletonMap; import static org.apache.solr.opentelemetry.TestDistributedTracing.getAndClearSpans; -import static org.apache.solr.security.Sha256AuthenticationProvider.getSaltedHashedValue; import io.opentelemetry.api.GlobalOpenTelemetry; import io.opentelemetry.api.trace.TracerProvider; @@ -32,7 +29,7 @@ import org.apache.solr.cloud.SolrCloudTestCase; import org.apache.solr.common.util.Utils; import org.apache.solr.security.BasicAuthPlugin; -import org.apache.solr.security.RuleBasedAuthorizationPlugin; +import org.apache.solr.util.SecurityJson; import org.apache.solr.util.tracing.TraceUtils; import org.junit.AfterClass; import org.junit.BeforeClass; @@ -41,27 +38,6 @@ public class BasicAuthIntegrationTracingTest extends SolrCloudTestCase { private static final String COLLECTION = "collection1"; - private static final String USER = "solr"; - private static final String PASS = "SolrRocksAgain"; - private static final String SECURITY_JSON = - Utils.toJSONString( - Map.of( - "authorization", - Map.of( - "class", - RuleBasedAuthorizationPlugin.class.getName(), - "user-role", - singletonMap(USER, "admin"), - "permissions", - singletonList(Map.of("name", "all", "role", "admin"))), - "authentication", - Map.of( - "class", - BasicAuthPlugin.class.getName(), - 
"blockUnknown", - true, - "credentials", - singletonMap(USER, getSaltedHashedValue(PASS))))); @BeforeClass public static void setupCluster() throws Exception { @@ -72,7 +48,7 @@ public static void setupCluster() throws Exception { .addConfig("config", TEST_PATH().resolve("collection1").resolve("conf")) .withSolrXml(TEST_PATH().resolve("solr.xml")) .withTraceIdGenerationDisabled() - .withSecurityJson(SECURITY_JSON) + .withSecurityJson(SecurityJson.SIMPLE) .configure(); assertNotEquals( @@ -81,7 +57,7 @@ public static void setupCluster() throws Exception { GlobalOpenTelemetry.get().getTracerProvider()); CollectionAdminRequest.createCollection(COLLECTION, "config", 2, 2) - .setBasicAuthCredentials(USER, PASS) + .setBasicAuthCredentials(SecurityJson.USER, SecurityJson.PASS) .process(cluster.getSolrClient()); cluster.waitForActiveCollection(COLLECTION, 2, 4); } @@ -106,7 +82,7 @@ public void testSetupBasicAuth() throws Exception { .withMethod(SolrRequest.METHOD.POST) .withPayload(Utils.toJSONString(ops)) .build(); - req.setBasicAuthCredentials(USER, PASS); + req.setBasicAuthCredentials(SecurityJson.USER, SecurityJson.PASS); assertEquals(0, req.process(cloudClient, COLLECTION).getStatus()); var finishedSpans = getAndClearSpans(); diff --git a/solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc b/solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc index 9cab1954eab..e0536003b49 100644 --- a/solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc +++ b/solr/solr-ref-guide/modules/configuration-guide/pages/caches-warming.adoc @@ -43,7 +43,7 @@ These settings are all configured in child elements of the `` element in Solr caches are associated with a specific instance of an Index Searcher, a specific view of an index that doesn't change during the lifetime of that searcher. As long as that Index Searcher is being used, any items in its cache will be valid and available for reuse. 
-By default cached Solr objects do not expire after a time interval; instead, they remain valid for the lifetime of the Index Searcher. +By default, cached Solr objects do not expire after a time interval; instead, they remain valid for the lifetime of the Index Searcher. Idle time-based expiration can be enabled by using `maxIdleTime` option. When a new searcher is opened, the current searcher continues servicing requests while the new one auto-warms its cache. @@ -56,8 +56,8 @@ The old searcher will be closed once it has finished servicing all its requests. Solr comes with a default `SolrCache` implementation that is used for different types of caches. The `CaffeineCache` is an implementation backed by the https://github.com/ben-manes/caffeine[Caffeine caching library]. -By default it uses a Window TinyLFU (W-TinyLFU) eviction policy, which allows the eviction based on both frequency and recency of use in O(1) time with a small footprint. -Generally this cache usually offers lower memory footprint, higher hit ratio, and better multi-threaded performance over legacy caches. +By default, it uses a Window TinyLFU (W-TinyLFU) eviction policy, which allows eviction based on both frequency and recency of use in O(1) time with a small footprint. +Generally, this cache offers a lower memory footprint, higher hit ratio, and better multithreaded performance than legacy caches. `CaffeineCache` uses an auto-warm count that supports both integers and percentages which get evaluated relative to the current size of the cache when warming happens. @@ -86,7 +86,6 @@ The async cache provides most significant improvement with many concurrent queries. However, the async cache will not prevent data races for time-limited queries, since those are expected to provide partial results. All caches can be disabled using the parameter `enabled` with a value of `false`. 
-Caches can also be disabled on a query-by-query basis with the `cache` parameter, as described in the section xref:query-guide:common-query-parameters.adoc#cache-local-parameter[cache Local Parameter]. Details of each cache are described below. @@ -95,10 +94,12 @@ Details of each cache are described below. This cache holds parsed queries paired with an unordered set of all documents that match it. Unless such a set is trivially small, the set implementation is a bitset. -The most typical way Solr uses the `filterCache` is to cache results of each `fq` search parameter, though there are some other cases as well. +The most typical way Solr uses the `filterCache` is to cache results of each `fq` search parameter, though there are some other use cases as well. Subsequent queries using the same parameter filter query result in cache hits and rapid returns of results. See xref:query-guide:common-query-parameters.adoc#fq-filter-query-parameter[fq (Filter Query) Parameter] for a detailed discussion of `fq`. -Use of this cache can be disabled for a `fq` using the xref:query-guide:common-query-parameters.adoc#cache-local-parameter[`cache` local parameter]. + +[TIP] +Use of this cache can be disabled for a specific query's `fq` using the xref:query-guide:common-query-parameters.adoc#cache-local-parameter[`cache` local parameter]. Another Solr feature using this cache is the `filter(...)` syntax in the default Lucene query parser. @@ -144,8 +145,6 @@ This lets you specify the maximum heap size, in megabytes, used by the contents When the cache grows beyond this size, oldest accessed queries will be evicted until the heap usage of the cache decreases below the specified limit. If a `size` is specified in addition to `maxRamMB` then only the heap usage limit is respected. -Use of this cache can be disabled on a query-by-query basis in `q` using the xref:query-guide:common-query-parameters.adoc#cache-local-parameter[`cache` local parameter]. 
- [source,xml] ---- Element This setting controls whether search requests for which there is not a currently registered searcher should wait for a new searcher to warm up (`false`) or proceed immediately (`true`). -When set to "false`, requests will block until the searcher has warmed its caches. +When set to `false`, requests will block until the searcher has warmed its caches. [source,xml] ---- diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/system-requirements.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/system-requirements.adoc index 0b5ae62ba5d..a38a7124147 100644 --- a/solr/solr-ref-guide/modules/deployment-guide/pages/system-requirements.adoc +++ b/solr/solr-ref-guide/modules/deployment-guide/pages/system-requirements.adoc @@ -55,7 +55,7 @@ We recommend you read the article https://medium.com/@javachampions/java-is-stil The Solr project does not endorse any particular provider of Java. -NOTE: While we reference the Java Development (JDK) on this page, any Java Runtime Environment (JRE) associated with the referenced JDKs is acceptable. +NOTE: While we reference the Java Development Kit (JDK) on this page, any Java Runtime Environment (JRE) associated with the referenced JDKs is acceptable. == Java and Solr Combinations diff --git a/solr/solr-ref-guide/modules/upgrade-notes/pages/major-changes-in-solr-10.adoc b/solr/solr-ref-guide/modules/upgrade-notes/pages/major-changes-in-solr-10.adoc index a712bbe54e8..e1491a6d74f 100644 --- a/solr/solr-ref-guide/modules/upgrade-notes/pages/major-changes-in-solr-10.adoc +++ b/solr/solr-ref-guide/modules/upgrade-notes/pages/major-changes-in-solr-10.adoc @@ -55,3 +55,7 @@ has been removed. Please use `-Dsolr.hiddenSysProps` or the envVar `SOLR_HIDDEN_ * The node configuration file `/solr.xml` can no longer be loaded from Zookeeper. Solr startup will fail if it is present. * The legacy Circuit Breaker named `CircuitBreakerManager`, is removed. 
Please use individual Circuit Breaker plugins instead. + +* The `BlobRepository`, which was deprecated in 8.x in favour of the `FileStore` approach, is removed. +Users should migrate to the `FileStore` implementation (per-node stored files) and the still-existing `BlobHandler` (cluster-wide storage backed by the `.system` collection). +Please note this also removes the ability to share resource-intensive objects across multiple cores, as this feature was tied to the `BlobRepository` implementation. diff --git a/solr/solrj-zookeeper/src/java/org/apache/solr/client/solrj/impl/ZkClientClusterStateProvider.java b/solr/solrj-zookeeper/src/java/org/apache/solr/client/solrj/impl/ZkClientClusterStateProvider.java index bc56881e86d..f9d202f5949 100644 --- a/solr/solrj-zookeeper/src/java/org/apache/solr/client/solrj/impl/ZkClientClusterStateProvider.java +++ b/solr/solrj-zookeeper/src/java/org/apache/solr/client/solrj/impl/ZkClientClusterStateProvider.java @@ -144,21 +144,10 @@ public static ClusterState createFromJsonSupportingLegacyConfigName( @Override public ClusterState.CollectionRef getState(String collection) { ClusterState clusterState = getZkStateReader().getClusterState(); - if (clusterState == null) { - return null; - } - - ClusterState.CollectionRef collectionRef = clusterState.getCollectionRef(collection); - if (collectionRef == null) { - // force update collection - try { - getZkStateReader().forceUpdateCollection(collection); - return getZkStateReader().getClusterState().getCollectionRef(collection); - } catch (KeeperException | InterruptedException e) { - return null; - } + if (clusterState != null) { + return clusterState.getCollectionRef(collection); } else { - return collectionRef; + return null; } } diff --git a/solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java b/solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java index 32ec0b2f323..b9882ddcc11 100644 --- 
a/solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java +++ b/solr/solrj-zookeeper/src/java/org/apache/solr/common/cloud/ZkStateReader.java @@ -119,7 +119,7 @@ public class ZkStateReader implements SolrCloseable { /** * This ZooKeeper file is no longer used starting with Solr 9 but keeping the name around to check - * if it is still present and non empty (in case of upgrade from previous Solr version). It used + * if it is still present and non-empty (in case of upgrade from previous Solr version). It used * to contain collection state for all collections in the cluster. */ public static final String UNSUPPORTED_CLUSTER_STATE = "/clusterstate.json"; @@ -163,7 +163,6 @@ public class ZkStateReader implements SolrCloseable { private static final int GET_LEADER_RETRY_INTERVAL_MS = 50; private static final int GET_LEADER_RETRY_DEFAULT_TIMEOUT = Integer.parseInt(System.getProperty("zkReaderGetLeaderRetryTimeoutMs", "4000")); - ; public static final String LEADER_ELECT_ZKNODE = "leader_elect"; @@ -1331,10 +1330,24 @@ public ConfigData getSecurityProps(boolean getFresh) { * Returns the baseURL corresponding to a given node's nodeName -- NOTE: does not (currently) * imply that the nodeName (or resulting baseURL) exists in the cluster. * - * @lucene.experimental + * @param nodeName name of the node + * @return url that looks like {@code https://localhost:8983/solr} */ public String getBaseUrlForNodeName(final String nodeName) { - return Utils.getBaseUrlForNodeName(nodeName, getClusterProperty(URL_SCHEME, "http")); + String urlScheme = getClusterProperty(URL_SCHEME, "http"); + return Utils.getBaseUrlForNodeName(nodeName, urlScheme, false); + } + + /** + * Returns the V2 baseURL corresponding to a given node's nodeName -- NOTE: does not (currently) + * imply that the nodeName (or resulting baseURL) exists in the cluster. 
+ * + * @param nodeName name of the node + * @return url that looks like {@code https://localhost:8983/api} + */ + public String getBaseUrlV2ForNodeName(final String nodeName) { + String urlScheme = getClusterProperty(URL_SCHEME, "http"); + return Utils.getBaseUrlForNodeName(nodeName, urlScheme, true); } /** Watches a single collection's state.json. */ @@ -2265,7 +2278,7 @@ public void applyModificationAndExportToZk(UnaryOperator op) { } /** - * Ensures the internal aliases is up to date. If there is a change, return true. + * Ensures the internal aliases is up-to-date. If there is a change, return true. * * @return true if an update was performed */ @@ -2273,7 +2286,7 @@ public boolean update() throws KeeperException, InterruptedException { if (log.isDebugEnabled()) { log.debug("Checking ZK for most up to date Aliases {}", ALIASES); } - // Call sync() first to ensure the subsequent read (getData) is up to date. + // Call sync() first to ensure the subsequent read (getData) is up-to-date. 
zkClient.getZooKeeper().sync(ALIASES, null, null); return setIfNewer(zkClient.getNode(ALIASES, null, true)); } diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Http2SolrClient.java b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Http2SolrClient.java index e437e2f0a18..74fa2b2aafd 100644 --- a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Http2SolrClient.java +++ b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Http2SolrClient.java @@ -112,8 +112,6 @@ public class Http2SolrClient extends HttpSolrClientBase { private final HttpClient httpClient; - private SSLConfig sslConfig; - private List listenerFactory = new ArrayList<>(); private final AsyncTracker asyncTracker = new AsyncTracker(); @@ -145,14 +143,11 @@ protected Http2SolrClient(String serverBaseUrl, Builder builder) { assert ObjectReleaseTracker.track(this); } + @Deprecated(since = "9.7") public void addListenerFactory(HttpListenerFactory factory) { this.listenerFactory.add(factory); } - public List getListenerFactory() { - return listenerFactory; - } - // internal usage only HttpClient getHttpClient() { return httpClient; @@ -345,7 +340,7 @@ boolean belongToThisStream(SolrRequest solrRequest, String collection) { && Objects.equals(origCollection, collection); } - public void write(byte b[]) throws IOException { + public void write(byte[] b) throws IOException { this.content.getOutputStream().write(b); } @@ -427,7 +422,6 @@ public CompletableFuture> requestAsync( if (ClientUtils.shouldApplyDefaultCollection(collection, solrRequest)) { collection = defaultCollection; } - MDCCopyHelper mdcCopyHelper = new MDCCopyHelper(); CompletableFuture> future = new CompletableFuture<>(); final MakeRequestReturnValue mrrv; final String url; @@ -438,13 +432,14 @@ public CompletableFuture> requestAsync( future.completeExceptionally(e); return future; } - final ResponseParser parser = - solrRequest.getResponseParser() == null ? 
this.parser : solrRequest.getResponseParser(); mrrv.request .onRequestQueued(asyncTracker.queuedListener) .onComplete(asyncTracker.completeListener) .send( new InputStreamResponseListener() { + // MDC snapshot from requestAsync's thread + MDCCopyHelper mdcCopyHelper = new MDCCopyHelper(); + @Override public void onHeaders(Response response) { super.onHeaders(response); @@ -503,7 +498,7 @@ public NamedList request(SolrRequest solrRequest, String collection) Request req = null; try { InputStreamResponseListener listener = new InputStreamReleaseTrackingResponseListener(); - req = makeRequestAndSend(solrRequest, url, listener, false); + req = sendRequest(makeRequest(solrRequest, url, false), listener); Response response = listener.get(idleTimeoutMillis, TimeUnit.MILLISECONDS); url = req.getURI().toString(); InputStream is = listener.getInputStream(); @@ -601,10 +596,7 @@ private void decorateRequest(Request req, SolrRequest solrRequest, boolean is Map headers = solrRequest.getHeaders(); if (headers != null) { - req.headers( - h -> - headers.entrySet().stream() - .forEach(entry -> h.add(entry.getKey(), entry.getValue()))); + req.headers(h -> headers.forEach(h::add)); } } @@ -629,12 +621,6 @@ private static class MakeRequestReturnValue { } } - private Request makeRequestAndSend( - SolrRequest solrRequest, String url, InputStreamResponseListener listener, boolean isAsync) - throws IOException, SolrServerException { - return sendRequest(makeRequest(solrRequest, url, isAsync), listener); - } - private MakeRequestReturnValue makeRequest( SolrRequest solrRequest, String url, boolean isAsync) throws IOException, SolrServerException { @@ -851,11 +837,6 @@ public static class Builder protected Long keyStoreReloadIntervalSecs; - public Http2SolrClient.Builder withListenerFactory(List listenerFactory) { - this.listenerFactory = listenerFactory; - return this; - } - private List listenerFactory; public Builder() { @@ -882,6 +863,11 @@ public Builder(String baseSolrUrl) { 
this.baseSolrUrl = baseSolrUrl; } + public Http2SolrClient.Builder withListenerFactory(List listenerFactory) { + this.listenerFactory = listenerFactory; + return this; + } + public HttpSolrClientBuilderBase withSSLConfig( SSLConfig sslConfig) { this.sslConfig = sslConfig; @@ -1054,6 +1040,10 @@ public Builder withHttpClient(Http2SolrClient http2SolrClient) { if (this.urlParamNames == null) { this.urlParamNames = http2SolrClient.urlParamNames; } + if (this.listenerFactory == null) { + this.listenerFactory = new ArrayList(); + http2SolrClient.listenerFactory.forEach(this.listenerFactory::add); + } return this; } diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpJdkSolrClient.java b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpJdkSolrClient.java index 034e16ac679..5a5b72e87c9 100644 --- a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpJdkSolrClient.java +++ b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpJdkSolrClient.java @@ -238,7 +238,7 @@ private PreparedRequest prepareGet( validateGetRequest(solrRequest); reqb.GET(); decorateRequest(reqb, solrRequest); - reqb.uri(new URI(url + "?" 
+ queryParams)); + reqb.uri(new URI(url + queryParams.toQueryString())); return new PreparedRequest(reqb, null); } @@ -298,13 +298,17 @@ private PreparedRequest preparePutOrPost( InputStream is = streams.iterator().next().getStream(); bodyPublisher = HttpRequest.BodyPublishers.ofInputStream(() -> is); - } else if (queryParams != null && urlParamNames != null) { - ModifiableSolrParams requestParams = queryParams; - queryParams = calculateQueryParams(urlParamNames, requestParams); - queryParams.add(calculateQueryParams(solrRequest.getQueryParams(), requestParams)); - bodyPublisher = HttpRequest.BodyPublishers.ofString(requestParams.toString()); } else { - bodyPublisher = HttpRequest.BodyPublishers.noBody(); + // move any params specified in urlParamNames or solrRequest from queryParams into urlParams + ModifiableSolrParams urlParams = calculateQueryParams(urlParamNames, queryParams); + urlParams.add(calculateQueryParams(solrRequest.getQueryParams(), queryParams)); + + // put the remaining params in the request body + // note the toQueryString() method adds a leading question mark which needs to be removed here + bodyPublisher = HttpRequest.BodyPublishers.ofString(queryParams.toQueryString().substring(1)); + + // replace queryParams with the selected set + queryParams = urlParams; } decorateRequest(reqb, solrRequest); @@ -313,7 +317,7 @@ private PreparedRequest preparePutOrPost( } else { reqb.method("POST", bodyPublisher); } - URI uriWithQueryParams = new URI(url + "?" 
+ queryParams); + URI uriWithQueryParams = new URI(url + queryParams.toQueryString()); reqb.uri(uriWithQueryParams); return new PreparedRequest(reqb, contentWritingFuture); diff --git a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java index 805bbd5f6f9..b2a08a7912b 100644 --- a/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java +++ b/solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java @@ -374,7 +374,7 @@ protected HttpRequestBase createMethod(SolrRequest request, String collection if (request instanceof V2Request) { if (System.getProperty("solr.v2RealPath") == null || ((V2Request) request).isForceV2()) { - basePath = baseUrl.replace("/solr", "/api"); + basePath = changeV2RequestEndpoint(baseUrl); } else { basePath = baseUrl + "/____v2"; } @@ -801,6 +801,7 @@ public ModifiableSolrParams getInvariantParams() { return invariantParams; } + /** Typically looks like {@code http://localhost:8983/solr} (no core or collection) */ public String getBaseURL() { return baseUrl; } diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java b/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java index 83898f57631..5f7a23f2ed0 100644 --- a/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java +++ b/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java @@ -25,8 +25,11 @@ import java.util.Collections; import java.util.HashMap; import java.util.LinkedHashMap; +import java.util.List; import java.util.Map; import java.util.Map.Entry; +import java.util.Objects; +import java.util.Optional; import java.util.Set; import java.util.concurrent.atomic.AtomicInteger; import java.util.function.Consumer; @@ -176,34 +179,53 @@ public Set getLiveNodes() { return Collections.unmodifiableSet(liveNodes); } + @Deprecated public String getShardId(String nodeName, String coreName) { return 
getShardId(null, nodeName, coreName); } + @Deprecated public String getShardId(String collectionName, String nodeName, String coreName) { - Collection states = collectionStates.values(); + if (coreName == null || nodeName == null) { + return null; + } + Collection states = Collections.emptyList(); if (collectionName != null) { CollectionRef c = collectionStates.get(collectionName); if (c != null) states = Collections.singletonList(c); + } else { + states = collectionStates.values(); } for (CollectionRef ref : states) { DocCollection coll = ref.get(); - if (coll == null) continue; // this collection go tremoved in between, skip - for (Slice slice : coll.getSlices()) { - for (Replica replica : slice.getReplicas()) { - // TODO: for really large clusters, we could 'index' on this - String rnodeName = replica.getStr(ReplicaStateProps.NODE_NAME); - String rcore = replica.getStr(ReplicaStateProps.CORE_NAME); - if (nodeName.equals(rnodeName) && coreName.equals(rcore)) { - return slice.getName(); - } - } - } + if (coll == null) continue; // this collection got removed in between, skip + // TODO: for really large clusters, we could 'index' on this + Optional shardId = Optional.ofNullable(coll.getReplicas(nodeName)).stream() + .flatMap(List::stream) + .filter(r -> coreName.equals(r.getStr(ReplicaStateProps.CORE_NAME))) + .map(Replica::getShard) + .findAny(); + // keep scanning the remaining collections when this one has no match + if (shardId.isPresent()) { + return shardId.get(); + } } return null; } + public Map> getReplicaNamesPerCollectionOnNode(final String nodeName) { + Map> replicaNamesPerCollectionOnNode = new HashMap<>(); + collectionStates.values().stream() + .map(CollectionRef::get) + .filter(Objects::nonNull) + .forEach( + col -> { + List replicas = col.getReplicas(nodeName); + if (replicas != null && !replicas.isEmpty()) { + replicaNamesPerCollectionOnNode.put(col.getName(), replicas); + } + }); + return replicaNamesPerCollectionOnNode; + } + /** Check if node is alive.
*/ public boolean liveNodesContain(String name) { return liveNodes.contains(name); diff --git a/solr/solrj/src/java/org/apache/solr/common/cloud/Replica.java b/solr/solrj/src/java/org/apache/solr/common/cloud/Replica.java index 0d4cd3afffb..e9c41df8c51 100644 --- a/solr/solrj/src/java/org/apache/solr/common/cloud/Replica.java +++ b/solr/solrj/src/java/org/apache/solr/common/cloud/Replica.java @@ -369,7 +369,7 @@ public String getProperty(String propertyName) { } public Replica copyWith(PerReplicaStates.State state) { - log.debug("A replica is updated with new state : {}", state); + log.debug("A replica is updated with new PRS state : {}", state); Map props = new LinkedHashMap<>(propMap); if (state == null) { props.put(ReplicaStateProps.STATE, State.DOWN.toString()); @@ -382,6 +382,12 @@ public Replica copyWith(PerReplicaStates.State state) { return r; } + public Replica copyWith(State state) { + Replica r = new Replica(name, propMap, collection, shard); + r.setState(state); + return r; + } + public PerReplicaStates.State getReplicaState() { if (perReplicaStatesRef != null) { return perReplicaStatesRef.get().get(name); diff --git a/solr/solrj/src/java/org/apache/solr/common/util/Utils.java b/solr/solrj/src/java/org/apache/solr/common/util/Utils.java index 4a1b39b378c..98899633bff 100644 --- a/solr/solrj/src/java/org/apache/solr/common/util/Utils.java +++ b/solr/solrj/src/java/org/apache/solr/common/util/Utils.java @@ -19,6 +19,7 @@ import static java.nio.charset.StandardCharsets.UTF_8; import static java.util.Collections.singletonList; import static java.util.concurrent.TimeUnit.NANOSECONDS; +import static org.apache.solr.common.SolrException.ErrorCode.SERVER_ERROR; import com.fasterxml.jackson.annotation.JsonAnyGetter; import java.io.ByteArrayInputStream; @@ -38,10 +39,13 @@ import java.lang.reflect.Field; import java.lang.reflect.Method; import java.lang.reflect.Modifier; +import java.math.BigInteger; import java.net.URL; import java.nio.BufferOverflowException; 
import java.nio.ByteBuffer; import java.nio.charset.StandardCharsets; +import java.security.MessageDigest; +import java.security.NoSuchAlgorithmException; import java.util.AbstractMap; import java.util.ArrayList; import java.util.Arrays; @@ -52,8 +56,10 @@ import java.util.HashSet; import java.util.LinkedHashMap; import java.util.List; +import java.util.Locale; import java.util.Map; import java.util.Objects; +import java.util.Random; import java.util.Set; import java.util.TreeMap; import java.util.TreeSet; @@ -90,6 +96,31 @@ public class Utils { private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); + public static final Random RANDOM; + + static { + // We try to make things reproducible in the context of our tests by initializing the random + // instance based on the current seed + String seed = System.getProperty("tests.seed"); + if (seed == null) { + RANDOM = new Random(); + } else { + RANDOM = new Random(seed.hashCode()); + } + } + + public static String sha512Digest(ByteBuffer byteBuffer) { + MessageDigest digest; + try { + digest = MessageDigest.getInstance("SHA-512"); + } catch (NoSuchAlgorithmException e) { + // unlikely + throw new SolrException(SERVER_ERROR, e); + } + digest.update(byteBuffer); + return String.format(Locale.ROOT, "%0128x", new BigInteger(1, digest.digest())); + } + @SuppressWarnings({"rawtypes"}) public static Map getDeepCopy(Map map, int maxDepth) { return getDeepCopy(map, maxDepth, true, false); @@ -716,10 +747,30 @@ public static String applyUrlScheme(final String url, final String urlScheme) { return (at == -1) ? (urlScheme + "://" + url) : urlScheme + url.substring(at); } + /** + * Construct a V1 base url for the Solr node, given its name (e.g., 'app-node-1:8983_solr') and a + * URL scheme. 
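The `sha512Digest` helper added to `Utils` above formats a SHA-512 digest as a fixed-width, zero-padded hex string. A minimal standalone sketch of the same technique (hypothetical class and method names, not the Solr API):

```java
import java.math.BigInteger;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Locale;

public class Sha512Hex {
    // Digest the buffer, then zero-pad the unsigned value to 128 hex chars
    // (512 bits / 4 bits per hex char) -- the same formatting trick as the
    // Utils.sha512Digest method added in this PR.
    static String sha512Hex(ByteBuffer buf) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-512");
            md.update(buf);
            return String.format(Locale.ROOT, "%0128x", new BigInteger(1, md.digest()));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-512 is mandatory on all JREs
        }
    }

    public static void main(String[] args) {
        String hex = sha512Hex(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));
        System.out.println(hex.length() + " " + hex); // always 128 hex chars
    }
}
```

The `BigInteger(1, bytes)` constructor treats the digest as an unsigned value, and the `%0128x` width flag restores any leading zero bytes that `BigInteger` would otherwise drop.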
+ * + * @param nodeName name of the Solr node + * @param urlScheme scheme for the base url ('http' or 'https') + * @return url that looks like {@code https://app-node-1:8983/solr} + * @throws IllegalArgumentException if the provided node name is malformed + */ public static String getBaseUrlForNodeName(final String nodeName, final String urlScheme) { return getBaseUrlForNodeName(nodeName, urlScheme, false); } + /** + * Construct a V1 or a V2 base url for the Solr node, given its name (e.g., + * 'app-node-1:8983_solr') and a URL scheme. + * + * @param nodeName name of the Solr node + * @param urlScheme scheme for the base url ('http' or 'https') + * @param isV2 whether a V2 url should be constructed + * @return url that looks like {@code https://app-node-1:8983/api} (V2) or {@code + * https://app-node-1:8983/solr} (V1) + * @throws IllegalArgumentException if the provided node name is malformed + */ public static String getBaseUrlForNodeName( final String nodeName, final String urlScheme, boolean isV2) { final int colonAt = nodeName.indexOf(':'); diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/impl/Http2SolrClientTest.java b/solr/solrj/src/test/org/apache/solr/client/solrj/impl/Http2SolrClientTest.java index b6642955c4a..384908a0547 100644 --- a/solr/solrj/src/test/org/apache/solr/client/solrj/impl/Http2SolrClientTest.java +++ b/solr/solrj/src/test/org/apache/solr/client/solrj/impl/Http2SolrClientTest.java @@ -130,7 +130,7 @@ protected void testQuerySetup(SolrRequest.METHOD method, ResponseParser rp) thro DebugServlet.clear(); String url = getBaseUrl() + DEBUG_SERVLET_PATH; SolrQuery q = new SolrQuery("foo"); - q.setParam("a", "\u1234"); + q.setParam("a", MUST_ENCODE); Http2SolrClient.Builder b = new Http2SolrClient.Builder(url).withDefaultCollection(DEFAULT_CORE); if (rp != null) { @@ -233,7 +233,7 @@ public void testUpdateDefault() throws Exception { String url = getBaseUrl() + DEBUG_SERVLET_PATH; try (Http2SolrClient client = new 
Http2SolrClient.Builder(url).withDefaultCollection(DEFAULT_CORE).build()) { - testUpdate(client, WT.JAVABIN, "application/javabin", "\u1234"); + testUpdate(client, WT.JAVABIN, "application/javabin", MUST_ENCODE); } } @@ -246,7 +246,7 @@ public void testUpdateXml() throws Exception { .withRequestWriter(new RequestWriter()) .withResponseParser(new XMLResponseParser()) .build()) { - testUpdate(client, WT.XML, "application/xml; charset=UTF-8", "\u1234"); + testUpdate(client, WT.XML, "application/xml; charset=UTF-8", MUST_ENCODE); } } @@ -259,7 +259,7 @@ public void testUpdateJavabin() throws Exception { .withRequestWriter(new BinaryRequestWriter()) .withResponseParser(new BinaryResponseParser()) .build()) { - testUpdate(client, WT.JAVABIN, "application/javabin", "\u1234"); + testUpdate(client, WT.JAVABIN, "application/javabin", MUST_ENCODE); } } diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpJdkSolrClientTest.java b/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpJdkSolrClientTest.java index 789ff090dd1..9821a8ec849 100644 --- a/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpJdkSolrClientTest.java +++ b/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpJdkSolrClientTest.java @@ -172,7 +172,7 @@ protected void testQuerySetup(SolrRequest.METHOD method, ResponseParser rp) thro } String url = getBaseUrl() + DEBUG_SERVLET_PATH; SolrQuery q = new SolrQuery("foo"); - q.setParam("a", "\u1234"); + q.setParam("a", MUST_ENCODE); HttpJdkSolrClient.Builder b = builder(url); if (rp != null) { b.withResponseParser(rp); @@ -313,7 +313,7 @@ public void testSolrExceptionWithNullBaseurl() throws IOException, SolrServerExc public void testUpdateDefault() throws Exception { String url = getBaseUrl() + DEBUG_SERVLET_PATH; try (HttpJdkSolrClient client = builder(url).build()) { - testUpdate(client, WT.JAVABIN, "application/javabin", "\u1234"); + testUpdate(client, WT.JAVABIN, "application/javabin", MUST_ENCODE); } } @@ -364,7 +364,7 @@ 
public void testUpdateJavabin() throws Exception { .withRequestWriter(new BinaryRequestWriter()) .withResponseParser(new BinaryResponseParser()) .build()) { - testUpdate(client, WT.JAVABIN, "application/javabin", "\u1234"); + testUpdate(client, WT.JAVABIN, "application/javabin", MUST_ENCODE); assertNoHeadRequestWithSsl(client); } } diff --git a/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpSolrClientTestBase.java b/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpSolrClientTestBase.java index 2c8caec0fdf..ec14e871e67 100644 --- a/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpSolrClientTestBase.java +++ b/solr/solrj/src/test/org/apache/solr/client/solrj/impl/HttpSolrClientTestBase.java @@ -21,6 +21,7 @@ import java.io.IOException; import java.io.InputStream; +import java.net.URLEncoder; import java.nio.charset.StandardCharsets; import java.util.ArrayList; import java.util.Base64; @@ -63,6 +64,8 @@ public abstract class HttpSolrClientTestBase extends SolrJettyTestBase { protected static final String REDIRECT_SERVLET_PATH = "/redirect"; protected static final String REDIRECT_SERVLET_REGEX = REDIRECT_SERVLET_PATH + "/*"; protected static final String COLLECTION_1 = "collection1"; + // example chars that must be URI encoded - non-ASCII and curly brace + protected static final String MUST_ENCODE = "\u1234\u007B"; @BeforeClass public static void beforeTest() throws Exception { @@ -112,7 +115,7 @@ public void testQueryGet() throws Exception { assertNull(DebugServlet.headers.get("content-type")); // param encoding assertEquals(1, DebugServlet.parameters.get("a").length); - assertEquals("\u1234", DebugServlet.parameters.get("a")[0]); + assertEquals(MUST_ENCODE, DebugServlet.parameters.get("a")[0]); } public void testQueryPost() throws Exception { @@ -124,9 +127,11 @@ public void testQueryPost() throws Exception { assertEquals("javabin", DebugServlet.parameters.get(CommonParams.WT)[0]); assertEquals(1,
DebugServlet.parameters.get(CommonParams.VERSION).length); assertEquals(1, DebugServlet.parameters.get("a").length); - assertEquals("\u1234", DebugServlet.parameters.get("a")[0]); + assertEquals(MUST_ENCODE, DebugServlet.parameters.get("a")[0]); assertEquals(expectedUserAgent(), DebugServlet.headers.get("user-agent")); assertEquals("application/x-www-form-urlencoded", DebugServlet.headers.get("content-type")); + // this validates that URI encoding has been applied - the content-length is smaller if not + assertEquals("41", DebugServlet.headers.get("content-length")); } public void testQueryPut() throws Exception { @@ -138,9 +143,10 @@ public void testQueryPut() throws Exception { assertEquals("javabin", DebugServlet.parameters.get(CommonParams.WT)[0]); assertEquals(1, DebugServlet.parameters.get(CommonParams.VERSION).length); assertEquals(1, DebugServlet.parameters.get("a").length); - assertEquals("\u1234", DebugServlet.parameters.get("a")[0]); + assertEquals(MUST_ENCODE, DebugServlet.parameters.get("a")[0]); assertEquals(expectedUserAgent(), DebugServlet.headers.get("user-agent")); assertEquals("application/x-www-form-urlencoded", DebugServlet.headers.get("content-type")); + assertEquals("41", DebugServlet.headers.get("content-length")); } public void testQueryXmlGet() throws Exception { @@ -152,7 +158,7 @@ public void testQueryXmlGet() throws Exception { assertEquals("xml", DebugServlet.parameters.get(CommonParams.WT)[0]); assertEquals(1, DebugServlet.parameters.get(CommonParams.VERSION).length); assertEquals(1, DebugServlet.parameters.get("a").length); - assertEquals("\u1234", DebugServlet.parameters.get("a")[0]); + assertEquals(MUST_ENCODE, DebugServlet.parameters.get("a")[0]); assertEquals(expectedUserAgent(), DebugServlet.headers.get("user-agent")); } @@ -165,7 +171,7 @@ public void testQueryXmlPost() throws Exception { assertEquals("xml", DebugServlet.parameters.get(CommonParams.WT)[0]); assertEquals(1, 
DebugServlet.parameters.get(CommonParams.VERSION).length); assertEquals(1, DebugServlet.parameters.get("a").length); - assertEquals("\u1234", DebugServlet.parameters.get("a")[0]); + assertEquals(MUST_ENCODE, DebugServlet.parameters.get("a")[0]); assertEquals(expectedUserAgent(), DebugServlet.headers.get("user-agent")); assertEquals("application/x-www-form-urlencoded", DebugServlet.headers.get("content-type")); } @@ -179,7 +185,7 @@ public void testQueryXmlPut() throws Exception { assertEquals("xml", DebugServlet.parameters.get(CommonParams.WT)[0]); assertEquals(1, DebugServlet.parameters.get(CommonParams.VERSION).length); assertEquals(1, DebugServlet.parameters.get("a").length); - assertEquals("\u1234", DebugServlet.parameters.get("a")[0]); + assertEquals(MUST_ENCODE, DebugServlet.parameters.get("a")[0]); assertEquals(expectedUserAgent(), DebugServlet.headers.get("user-agent")); assertEquals("application/x-www-form-urlencoded", DebugServlet.headers.get("content-type")); } @@ -283,7 +289,8 @@ protected void testUpdate(HttpSolrClientBase client, WT wt, String contentType, SolrInputDocument doc = new SolrInputDocument(); doc.addField("id", docIdValue); req.add(doc); - req.setParam("a", "\u1234"); + // non-ASCII characters and curly braces should be URI-encoded + req.setParam("a", MUST_ENCODE); try { client.request(req); @@ -300,7 +307,7 @@ protected void testUpdate(HttpSolrClientBase client, WT wt, String contentType, client.getParser().getVersion(), DebugServlet.parameters.get(CommonParams.VERSION)[0]); assertEquals(contentType, DebugServlet.headers.get("content-type")); assertEquals(1, DebugServlet.parameters.get("a").length); - assertEquals("\u1234", DebugServlet.parameters.get("a")[0]); + assertEquals(MUST_ENCODE, DebugServlet.parameters.get("a")[0]); if (wt == WT.XML) { String requestBody = new String(DebugServlet.requestBody, StandardCharsets.UTF_8); @@ -337,12 +344,14 @@ protected void testCollectionParameters( protected void setReqParamsOf(UpdateRequest req,
String... keys) { if (keys != null) { for (String k : keys) { - req.setParam(k, k + "Value"); + // note inclusion of non-ASCII character and curly brace, which should be URI-encoded + req.setParam(k, k + "Value" + MUST_ENCODE); } } } - protected void verifyServletState(HttpSolrClientBase client, SolrRequest request) { + protected void verifyServletState(HttpSolrClientBase client, SolrRequest request) + throws Exception { // check query String Iterator paramNames = request.getParams().getParameterNamesIterator(); while (paramNames.hasNext()) { @@ -354,7 +363,9 @@ protected void verifyServletState(HttpSolrClientBase client, SolrRequest requ client.getUrlParamNames().contains(name) || (request.getQueryParams() != null && request.getQueryParams().contains(name)); assertEquals( - shouldBeInQueryString, DebugServlet.queryString.contains(name + "=" + value)); + shouldBeInQueryString, + DebugServlet.queryString.contains( + name + "=" + URLEncoder.encode(value, StandardCharsets.UTF_8.name()))); // in either case, it should be in the parameters assertNotNull(DebugServlet.parameters.get(name)); assertEquals(1, DebugServlet.parameters.get(name).length); diff --git a/solr/test-framework/src/java/org/apache/solr/cloud/AbstractBasicDistributedZkTestBase.java b/solr/test-framework/src/java/org/apache/solr/cloud/AbstractBasicDistributedZkTestBase.java index 96d0c9952a3..91fe79b6c73 100644 --- a/solr/test-framework/src/java/org/apache/solr/cloud/AbstractBasicDistributedZkTestBase.java +++ b/solr/test-framework/src/java/org/apache/solr/cloud/AbstractBasicDistributedZkTestBase.java @@ -1115,16 +1115,6 @@ public static void createCollectionInOneInstance( } } - protected String getBaseUrl(SolrClient client) { - String url2 = - ((HttpSolrClient) client) - .getBaseURL() - .substring( - 0, - ((HttpSolrClient) client).getBaseURL().length() - DEFAULT_COLLECTION.length() - 1); - return url2; - } - @Override protected CollectionAdminResponse createCollection( Map> collectionInfos, diff --git
a/solr/test-framework/src/java/org/apache/solr/cloud/AbstractFullDistribZkTestBase.java b/solr/test-framework/src/java/org/apache/solr/cloud/AbstractFullDistribZkTestBase.java index 8641c4a2fab..e7aafb2ded5 100644 --- a/solr/test-framework/src/java/org/apache/solr/cloud/AbstractFullDistribZkTestBase.java +++ b/solr/test-framework/src/java/org/apache/solr/cloud/AbstractFullDistribZkTestBase.java @@ -159,7 +159,10 @@ public static class CloudJettyRunner { public JettySolrRunner jetty; public String nodeName; public String coreNodeName; + + /** Core or Collection URL */ public String url; + public CloudSolrServerClient client; public ZkNodeProps info; diff --git a/solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractBackupRepositoryTest.java b/solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractBackupRepositoryTest.java index d988c5662bc..34b0d4a2b77 100644 --- a/solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractBackupRepositoryTest.java +++ b/solr/test-framework/src/java/org/apache/solr/cloud/api/collections/AbstractBackupRepositoryTest.java @@ -20,7 +20,6 @@ import static org.apache.lucene.codecs.CodecUtil.FOOTER_MAGIC; import static org.apache.lucene.codecs.CodecUtil.writeBEInt; import static org.apache.lucene.codecs.CodecUtil.writeBELong; -import static org.apache.solr.core.backup.repository.AbstractBackupRepository.PARAM_VERIFY_CHECKSUM; import static org.apache.solr.core.backup.repository.DelegatingBackupRepository.PARAM_DELEGATE_REPOSITORY_NAME; import java.io.File; diff --git a/solr/test-framework/src/java/org/apache/solr/embedded/JettySolrRunner.java b/solr/test-framework/src/java/org/apache/solr/embedded/JettySolrRunner.java index edb7694abf1..cbfb23ffed9 100644 --- a/solr/test-framework/src/java/org/apache/solr/embedded/JettySolrRunner.java +++ b/solr/test-framework/src/java/org/apache/solr/embedded/JettySolrRunner.java @@ -834,10 +834,7 @@ public void setProxyPort(int proxyPort) { 
this.proxyPort = proxyPort; } - /** - * Returns a base URL consisting of the protocol, host, and port for a Connector in use by the - * Jetty Server contained in this runner. - */ + /** Returns a base URL like {@code http://localhost:8983/solr} */ public URL getBaseUrl() { try { return new URL(protocol, host, jettyPort, "/solr"); diff --git a/solr/test-framework/src/java/org/apache/solr/util/SecurityJson.java b/solr/test-framework/src/java/org/apache/solr/util/SecurityJson.java new file mode 100644 index 00000000000..2b73b8fde19 --- /dev/null +++ b/solr/test-framework/src/java/org/apache/solr/util/SecurityJson.java @@ -0,0 +1,62 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.solr.util; + +import static java.util.Collections.singletonList; +import static java.util.Collections.singletonMap; +import static org.apache.solr.security.Sha256AuthenticationProvider.getSaltedHashedValue; + +import java.util.Map; +import org.apache.solr.common.util.Utils; +import org.apache.solr.security.BasicAuthPlugin; +import org.apache.solr.security.RuleBasedAuthorizationPlugin; + +/** + * Provides security.json constants for use in tests that enable security. + * + *

Many tests require a simple security.json with one admin user and the BasicAuthPlugin enabled; + * such a configuration is represented by the SIMPLE constant here. Other variants of security.json + * can be moved to this class if and when they are duplicated by two or more tests. + */ +public final class SecurityJson { + + private SecurityJson() {} + + public static final String USER = "solr"; + public static final String PASS = "SolrRocksAgain"; + public static final String USER_PASS = USER + ":" + PASS; + + public static final String SIMPLE = + Utils.toJSONString( + Map.of( + "authorization", + Map.of( + "class", + RuleBasedAuthorizationPlugin.class.getName(), + "user-role", + singletonMap(USER, "admin"), + "permissions", + singletonList(Map.of("name", "all", "role", "admin"))), + "authentication", + Map.of( + "class", + BasicAuthPlugin.class.getName(), + "blockUnknown", + true, + "credentials", + singletonMap(USER, getSaltedHashedValue(PASS))))); +} diff --git a/solr/webapp/web/js/angular/controllers/cloud.js b/solr/webapp/web/js/angular/controllers/cloud.js index 75eef23dae5..01e284b2001 100644 --- a/solr/webapp/web/js/angular/controllers/cloud.js +++ b/solr/webapp/web/js/angular/controllers/cloud.js @@ -155,7 +155,7 @@ var nodesSubController = function($scope, Collections, System, Metrics) { $scope.isFirstNodeForHost = function(node) { var hostName = node.split(":")[0]; var nodesInHost = $scope.filteredNodes.filter(function (node) { - return node.startsWith(hostName); + return node.split(":")[0] === hostName; }); return nodesInHost[0] === node; }; @@ -164,7 +164,7 @@ var nodesSubController = function($scope, Collections, System, Metrics) { $scope.firstLiveNodeForHost = function(key) { var hostName = key.split(":")[0]; var liveNodesInHost = $scope.filteredNodes.filter(function (key) { - return key.startsWith(hostName); + return key.split(":")[0] === hostName; }).filter(function (key) { return $scope.live_nodes.includes(key); }); @@ -329,7 +329,7 @@ var 
nodesSubController = function($scope, Collections, System, Metrics) { hostsToShow.push(hostName); if (isFiltered) { // Only show the nodes per host matching active filter nodesToShow = nodesToShow.concat(filteredNodes.filter(function (node) { - return node.startsWith(hostName); + return node.split(":")[0] === hostName; })); } else { nodesToShow = nodesToShow.concat(hosts[hostName]['nodes']); diff --git a/versions.lock b/versions.lock index 7f8b6572279..a9ae45802ae 100644 --- a/versions.lock +++ b/versions.lock @@ -78,7 +78,7 @@ com.sun.istack:istack-commons-runtime:3.0.12 (1 constraints: eb0d9a43) com.tdunning:t-digest:3.3 (1 constraints: aa04232c) com.zaxxer:SparseBitSet:1.2 (1 constraints: 0d081e75) commons-cli:commons-cli:1.7.0 (1 constraints: 0a050536) -commons-codec:commons-codec:1.16.1 (12 constraints: 44a7ebf4) +commons-codec:commons-codec:1.17.0 (12 constraints: 44a7edf4) commons-collections:commons-collections:3.2.2 (1 constraints: 09050236) commons-io:commons-io:2.15.1 (10 constraints: 4375f24a) de.l3s.boilerpipe:boilerpipe:1.1.0 (1 constraints: 590ce401) diff --git a/versions.props b/versions.props index 36875179686..aa9c20b7f61 100644 --- a/versions.props +++ b/versions.props @@ -18,7 +18,7 @@ com.jayway.jsonpath:json-path=2.9.0 com.lmax:disruptor=3.4.4 com.tdunning:t-digest=3.3 commons-cli:commons-cli=1.7.0 -commons-codec:commons-codec=1.16.1 +commons-codec:commons-codec=1.17.0 commons-collections:commons-collections=3.2.2 commons-io:commons-io=2.15.1 io.dropwizard.metrics:*=4.2.25
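Several of the changes above (`getBaseUrlForNodeName`, `getBaseUrlV2ForNodeName`) revolve around mapping a node name such as `app-node-1:8983_solr` to a V1 (`/solr`) or V2 (`/api`) base URL. A simplified re-implementation for illustration only — it assumes the default `solr` context after the `_` separator and skips the URL-unescaping the real `Utils` method performs; the class and method names are hypothetical:

```java
public class NodeNameUrl {
    // Sketch of the nodeName -> base URL mapping described by the javadoc in
    // this PR; "app-node-1:8983_solr" -> "https://app-node-1:8983/solr" (V1)
    // or "https://app-node-1:8983/api" (V2).
    static String baseUrlForNodeName(String nodeName, String urlScheme, boolean isV2) {
        int colonAt = nodeName.indexOf(':');
        if (colonAt == -1) {
            throw new IllegalArgumentException("nodeName does not contain ':' => " + nodeName);
        }
        // everything before the '_' is host:port; the rest is the web context
        int underscoreAt = nodeName.indexOf('_', colonAt);
        String hostAndPort = (underscoreAt == -1) ? nodeName : nodeName.substring(0, underscoreAt);
        // V2 urls always use the fixed "/api" root; V1 uses the context (assumed "solr" here)
        return urlScheme + "://" + hostAndPort + "/" + (isV2 ? "api" : "solr");
    }

    public static void main(String[] args) {
        System.out.println(baseUrlForNodeName("app-node-1:8983_solr", "https", false));
        System.out.println(baseUrlForNodeName("app-node-1:8983_solr", "https", true));
    }
}
```

This mirrors why `ZkStateReader` gained a separate `getBaseUrlV2ForNodeName`: the scheme comes from the `urlScheme` cluster property, while the V1/V2 switch only changes the path root.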