
CNDB-11613 SAI compressed indexes #1474

Open · wants to merge 11 commits into main from c11613-sai-compression

Conversation

pkolaczk

What is the issue

SAI indexes sometimes consume too much storage space.

What does this PR fix and why was it fixed

This PR makes it possible to compress both the per-sstable and per-index components of SAI.
Use the index_compression table param to control compression of the per-sstable components.
Use the compression property on the index to control compression of the per-index components, as sketched below.
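
For illustration, a hedged CQLTester-style sketch of the two knobs described above (the exact statement shapes are assumptions, and a later commit in this PR refines the index-level property into key_compression and value_compression):

    // Hypothetical usage sketch; option names follow this PR description.
    // Per-sstable SAI components: controlled by the table-level index_compression param.
    execute("ALTER TABLE ks.tab WITH index_compression = {'class': 'LZ4Compressor'}");
    // Per-index SAI components: controlled by a compression property on the index.
    execute("CREATE CUSTOM INDEX ON ks.tab(v) USING 'StorageAttachedIndex' " +
            "WITH compression = {'class': 'LZ4Compressor'}");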

Checklist before you submit for review

  • Make sure there is a PR in the CNDB project updating the Converged Cassandra version
  • Use NoSpamLogger for log lines that may appear frequently in the logs
  • Verify test results on Butler
  • Test coverage for new/modified code is > 80%
  • Proper code formatting
  • Proper title for each commit starting with the project-issue number, like CNDB-1234
  • Each commit has a meaningful description
  • Each commit is reasonably small and contains only related changes
  • Renames, moves and reformatting are in distinct commits

@pkolaczk force-pushed the c11613-sai-compression branch from b6c7de2 to cc3605e on January 20, 2025 at 15:43
@pkolaczk changed the title from C11613 sai compression to CNDB-11613 SAI compressed indexes on Jan 21, 2025
@pkolaczk force-pushed the c11613-sai-compression branch 2 times, most recently from 6969bb1 to 5415fd8 on January 27, 2025 at 16:27
maoling and others added 10 commits on January 30, 2025 at 09:32
What was ported:
- current compaction throughput measurement by CompactionManager
- exposing current compaction throughput in StorageService
  and CompactionMetrics
- nodetool getcompactionthroughput, including tests

Not ported:
- changes to `nodetool compactionstats`, because that would
  also require porting the tests, which are currently missing in CC,
  and porting those tests turned out to be a complex task without
  porting the other changes in the CompactionManager API
- code for getting / setting compaction throughput as double
This commit introduces a new AdaptiveCompressor class.

AdaptiveCompressor uses ZStandard compression with a dynamic
compression level based on the current write load. AdaptiveCompressor's
goal is to provide write performance similar to that of LZ4Compressor
for write-heavy workloads, but a significantly better compression ratio
for databases with a moderate amount of writes or on systems
with a lot of spare CPU power.

If the memtable flush queue builds up and compression turns out to be
a significant bottleneck, the compression level used for
flushing is decreased to gain speed. Similarly, when pending
compaction tasks build up, the compression level used
for compaction is decreased.

In order to enable adaptive compression:
  - set `-Dcassandra.default_sstable_compression=adaptive` JVM option
    to automatically select `AdaptiveCompressor` as the main compressor
    for flushes and new tables, if not overridden by specific options in
    cassandra.yaml or table schema
  - set `flush_compression: adaptive` in cassandra.yaml to enable it
    for flushing
  - set `AdaptiveCompressor` in the table options to enable it
    for compaction

Caution: this feature is not turned on by default because it
may impact read speed negatively in some rare cases.

Fixes riptano/cndb#11532
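
A minimal sketch of the feedback loop described in the commit message above, assuming a hypothetical method name, a synthetic queue-pressure metric, and made-up level bounds (the actual logic lives in AdaptiveCompressor in this PR's diff):

    // More pressure on the flush/compaction queue -> lower, faster zstd level.
    static int adaptiveZstdLevel(double queuePressure, int minLevel, int maxLevel)
    {
        // queuePressure: 0.0 = queue empty, 1.0 = queue saturated (assumed metric)
        double clamped = Math.max(0.0, Math.min(1.0, queuePressure));
        return maxLevel - (int) Math.round(clamped * (maxLevel - minLevel));
    }

For example, adaptiveZstdLevel(0.9, 2, 9) picks the fast level 3 under heavy write load, while an idle system gets the maximum level 9.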
Reduces some overhead of setting up / tearing down the zstd
contexts that happened inside the calls to Zstd.compress
/ Zstd.decompress. This makes a difference with very small chunks.

Additionally, added some compression/decompression
rate metrics.
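
The context reuse presumably follows the standard zstd-jni pattern; a minimal sketch of that pattern with ZstdCompressCtx (thread-local reuse is an assumption here, not necessarily how this PR implements it):

    import com.github.luben.zstd.ZstdCompressCtx;

    public final class ReusableZstd
    {
        // Reuse one native compression context per thread instead of creating one per call.
        private static final ThreadLocal<ZstdCompressCtx> CTX =
            ThreadLocal.withInitial(ZstdCompressCtx::new);

        public static byte[] compress(byte[] input, int level)
        {
            ZstdCompressCtx ctx = CTX.get();
            ctx.setLevel(level);        // reconfigures the existing context, no reallocation
            return ctx.compress(input); // skips the per-call setup/teardown done by Zstd.compress
        }
    }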
Index compression options have been split into
key_compression and value_compression so you can write:

CREATE INDEX ON tab(v)
  WITH key_compression = { 'class': 'LZ4Compressor' }
  AND value_compression = { 'class': 'LZ4Compressor' };
@pkolaczk force-pushed the c11613-sai-compression branch from 5415fd8 to 18e8b04 on January 30, 2025 at 18:26
@pkolaczk force-pushed the c11613-sai-compression branch from 18e8b04 to 71a3a1e on January 31, 2025 at 07:51
@cassci-bot

❌ Build ds-cassandra-pr-gate/PR-1474 rejected by Butler


1 new test failure in 8 builds
See build details here


Found 1 new test failure

Test                                               | Explanation | Branch history | Upstream history
...thRestartTest.testReadingValuesOfDroppedColumns | regression  | 🔴🔵🔵🔵🔵🔵🔵  | 🔵🔵🔵🔵🔵🔵🔵

Found 239 known test failures

@@ -347,6 +347,9 @@ public enum CassandraRelevantProperties
/** Watcher used when opening sstables to discover extra components, eg. archive component */
CUSTOM_SSTABLE_WATCHER("cassandra.custom_sstable_watcher"),

/** When enabled, a user can set compression options in the index schema */
INDEX_COMPRESSION("cassandra.index.compression_enabled", "false"),

Since this is a DS-only property, should we use a different prefix, as in ds.index.compression_enabled, so it's easier for us to identify these properties?

Also, the property could be named INDEX_COMPRESSION_ENABLED, or perhaps USE_INDEX_COMPRESSION, so the name suggests that it's a boolean property.
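
A one-line sketch combining both suggestions (name and prefix hypothetical):

    INDEX_COMPRESSION_ENABLED("ds.index.compression_enabled", "false"),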

public static boolean shouldUseAdaptiveCompressionByDefault()
{
    return System.getProperty("cassandra.default_sstable_compression", "fast").equals("adaptive");
}

This would probably be better in CassandraRelevantProperties.
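
A hedged sketch of how that might look (the constant name is assumed):

    // In CassandraRelevantProperties (hypothetical entry):
    DEFAULT_SSTABLE_COMPRESSION("cassandra.default_sstable_compression", "fast"),

    // The check then becomes:
    public static boolean shouldUseAdaptiveCompressionByDefault()
    {
        return "adaptive".equals(DEFAULT_SSTABLE_COMPRESSION.getString());
    }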

Comment on lines +244 to +245
* Builds a `WITH option1 = ... AND option2 = ... AND option3 = ... clause
* @param builder a receiver to receive a builder allowing to add each option

Suggested change
* Builds a `WITH option1 = ... AND option2 = ... AND option3 = ... clause
* @param builder a receiver to receive a builder allowing to add each option
* Builds a {@code WITH option1 = ... AND option2 = ... AND option3 = ...} clause.
*
* @param builder a consumer to receive a builder allowing to add each option


public static class OptionsBuilder
{
private CqlBuilder builder;

Can be final

public static class OptionsBuilder
{
private CqlBuilder builder;
boolean empty = true;

Can be private
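
With both review suggestions applied, the snippet would read (the constructor shape is assumed):

    public static class OptionsBuilder
    {
        private final CqlBuilder builder; // final: assigned only in the constructor
        private boolean empty = true;     // private: not accessed outside this class

        OptionsBuilder(CqlBuilder builder)
        {
            this.builder = builder;
        }
    }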

* May not modify this object.
* Should return null if the request cannot be satisfied.
*/
default ICompressor forUse(Uses use)

Nit: add @Nullable

.row(table.name, index.name)
.add("kind", index.kind.toString())
.add("options", index.options);

if (CassandraRelevantProperties.INDEX_COMPRESSION.getBoolean())

Maybe we should add a note here or in CassandraRelevantProperties.INDEX_COMPRESSION about how enabling index compression can be problematic for downgrades?
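
For instance, the note could live on the property itself (a sketch; the wording is assumed):

    /**
     * When enabled, a user can set compression options in the index schema.
     * Note: index components written with compression enabled may be unreadable
     * by older versions, so enabling this can be problematic for downgrades.
     */
    INDEX_COMPRESSION("cassandra.index.compression_enabled", "false"),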

Comment on lines +1 to +17
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

We need the header with Copyright DataStax, Inc. instead of the ASF header.

}

@Test
public void testKeyCompression()

It would be great to have a test where we create an index with a certain key compression and then a second index with a different key compression.
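
A hedged sketch of such a test in CQLTester style (the DDL and assertion are assumptions; per the schema-change code below, the newer index's key_compression is expected to win):

    @Test
    public void testDifferentKeyCompressionOnSecondIndex()
    {
        createTable("CREATE TABLE %s (pk int PRIMARY KEY, a text, b text)");
        createIndex("CREATE CUSTOM INDEX ON %s(a) USING 'StorageAttachedIndex' " +
                    "WITH key_compression = {'class': 'LZ4Compressor'}");
        // All indexes on a table must share key_compression, so this should
        // force ZstdCompressor onto the first index as well:
        createIndex("CREATE CUSTOM INDEX ON %s(b) USING 'StorageAttachedIndex' " +
                    "WITH key_compression = {'class': 'ZstdCompressor'}");
        // ... then assert both indexes report ZstdCompressor for key_compression
    }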

@@ -194,7 +210,10 @@ public Keyspaces apply(Keyspaces schema)
throw ire("Index %s is a duplicate of existing index %s", index.name, equalIndex.name);
}

TableMetadata newTable = table.withSwapped(table.indexes.with(index));
// All indexes on one table must use the same key_compression.
// The newly created index forces key_compression on the previous indexes.

Do we want to emit a client warning about this?
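
If so, a minimal sketch (the message wording is assumed):

    // Hypothetical client warning emitted when the new index changes key_compression
    // of the existing indexes on the same table:
    ClientWarn.instance.warn(String.format("Creating index %s changes key_compression of " +
                                           "existing indexes on table %s", index.name, table.name));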


4 participants