
[feat][storage] Add SpanKind support for badger #6376

Open
wants to merge 1 commit into base: main

Conversation

Manik2708
Contributor

Which problem is this PR solving?

Description of the changes

  • Queries with span kind will now be supported for Badger

How was this change tested?

  • Writing unit tests

Checklist

@Manik2708 Manik2708 requested a review from a team as a code owner December 17, 2024 07:43
@Manik2708 Manik2708 requested a review from jkowall December 17, 2024 07:43
@dosubot dosubot bot added enhancement storage/badger Issues related to badger storage labels Dec 17, 2024
@Manik2708
Contributor Author

Manik2708 commented Dec 17, 2024

I have changed the structure of the cache, which leads to these concerns:

  1. Will a 3D map be a viable option for production?
  2. The cache will never be able to retrieve operations of old data! When the kind is not sent by the user, all operations related to new data will be returned. I have a probable solution for this: we might have to introduce a boolean which, when true, loads the cache from old data (the old index key) and marks all of those spans with kind UNSPECIFIED.
  3. To maintain consistency, we must take the service name from the newly created index, but extracting the service name from serviceName+operationName+kind is the challenge. The solution I have thought of is reserving the last 7 places of the new index for len(serviceName)+len(operationName)+kind. The drawback is that we have to limit the lengths of serviceName and operationName to 999. This way we can also get rid of the c.services map. Removing this map is optional and a matter of discussion, because it trades storage for iteration: removing it leads to extra iterations in GetServices. I also thought of a solution for this (see the sketch after the struct below):
data map[string]serviceEntry
// where serviceEntry (a placeholder name) can be defined as:
type serviceEntry struct {
	expiryTime uint64
	operations map[trace.SpanKind]map[string]uint64
}
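To make point 3 concrete, here is a rough sketch of the fixed-width suffix idea (encodeIndexValue/decodeIndexValue are made-up names; assumes fmt and strconv are imported and both names are capped at 999 bytes):

// The index value ends with 3 digits for len(serviceName),
// 3 digits for len(operationName), and 1 digit for the kind.
func encodeIndexValue(service, operation string, kind int) string {
	return fmt.Sprintf("%s%s%03d%03d%d", service, operation, len(service), len(operation), kind)
}

func decodeIndexValue(v string) (service, operation string, kind int, err error) {
	n := len(v)
	svcLen, err := strconv.Atoi(v[n-7 : n-4]) // 3-digit service name length
	if err != nil {
		return "", "", 0, err
	}
	opLen, err := strconv.Atoi(v[n-4 : n-1]) // 3-digit operation name length
	if err != nil {
		return "", "", 0, err
	}
	kind, err = strconv.Atoi(v[n-1:]) // 1-digit span kind
	if err != nil {
		return "", "", 0, err
	}
	return v[:svcLen], v[svcLen : svcLen+opLen], kind, nil
}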

Once the correct approach is discussed I will handle some more edge cases and make the e2e tests pass (setting GetOperationsMissingSpanKind: false).


codecov bot commented Dec 17, 2024

Codecov Report

Attention: Patch coverage is 95.85492% with 8 lines in your changes missing coverage. Please review.

Project coverage is 96.20%. Comparing base (b689a86) to head (3ba24a9).

Files with missing lines Patch % Lines
plugin/storage/badger/spanstore/reader.go 93.47% 4 Missing and 2 partials ⚠️
plugin/storage/badger/spanstore/kind.go 89.47% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6376      +/-   ##
==========================================
- Coverage   96.24%   96.20%   -0.05%     
==========================================
  Files         375      376       +1     
  Lines       21397    21530     +133     
==========================================
+ Hits        20593    20712     +119     
- Misses        612      623      +11     
- Partials      192      195       +3     
Flag Coverage Δ
badger_v1 11.12% <51.81%> (+0.48%) ⬆️
badger_v2 2.75% <0.00%> (-0.04%) ⬇️
cassandra-4.x-v1-manual 16.44% <0.00%> (-0.21%) ⬇️
cassandra-4.x-v2-auto 2.68% <0.00%> (-0.04%) ⬇️
cassandra-4.x-v2-manual 2.68% <0.00%> (-0.04%) ⬇️
cassandra-5.x-v1-manual 16.44% <0.00%> (-0.21%) ⬇️
cassandra-5.x-v2-auto 2.68% <0.00%> (-0.04%) ⬇️
cassandra-5.x-v2-manual 2.68% <0.00%> (-0.04%) ⬇️
elasticsearch-6.x-v1 20.19% <0.00%> (-0.26%) ⬇️
elasticsearch-7.x-v1 20.25% <0.00%> (-0.27%) ⬇️
elasticsearch-8.x-v1 20.42% <0.00%> (-0.25%) ⬇️
elasticsearch-8.x-v2 2.74% <0.00%> (-0.04%) ⬇️
grpc_v1 12.05% <0.00%> (-0.15%) ⬇️
grpc_v2 8.93% <0.00%> (-0.12%) ⬇️
kafka-3.x-v1 10.23% <0.00%> (-0.13%) ⬇️
kafka-3.x-v2 2.75% <0.00%> (-0.04%) ⬇️
memory_v2 2.74% <0.00%> (-0.05%) ⬇️
opensearch-1.x-v1 20.30% <0.00%> (-0.27%) ⬇️
opensearch-2.x-v1 20.30% <0.00%> (-0.26%) ⬇️
opensearch-2.x-v2 2.74% <0.00%> (-0.05%) ⬇️
tailsampling-processor 0.51% <0.00%> (-0.01%) ⬇️
unittests 95.06% <95.85%> (-0.04%) ⬇️

Flags with carried forward coverage won't be shown.

@Manik2708
Contributor Author

@yurishkuro Please review the approach and the problems above!

@Manik2708
Contributor Author

@yurishkuro I have added more changes which reduce the iterations in prefill to 1, but this limits the serviceName to a length of 999. Please review!

@Manik2708
Contributor Author

Manik2708 commented Dec 19, 2024

I have an idea for handling old data without using the migration script! We can store the old data in two other cache data structures (without kind). But then the only question which arises is: what to return when no span kind is given by the user? Operations of new data of all kinds, operations of old data (kind marked as unspecified), or a union of both?

@yurishkuro yurishkuro added the changelog:new-feature Change that should be called out as new feature in CHANGELOG label Dec 20, 2024
model/span.go Outdated
model/span.go Outdated
model/span.go Outdated
@yurishkuro
Member

What to return when no span kind is given by the user?

Then we should return all operations regardless of the span kind.

@Manik2708
Contributor Author

What to return when no span kind is given by the user?

Then we should return all operations regardless of the span kind.

That means also including all spans of old data (whose kind is not in the cache)?

@Manik2708 Manik2708 marked this pull request as draft December 22, 2024 14:04
@Manik2708 Manik2708 marked this pull request as ready for review December 22, 2024 19:16
@dosubot dosubot bot added the area/storage label Dec 22, 2024
@Manik2708
Contributor Author

My current approach is leading to errors in the unit tests of factory_test.go. Badger is throwing this error repeatedly:

runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1700, retrying
badger 2024/12/23 01:12:11 ERROR: error flushing memtable to disk: error while creating table err: while creating table: /tmp/badger116881967/000002.sst error: open /tmp/badger116881967/000002.sst: no such file or directory
unable to open: /tmp/badger116881967/000002.sst
github.com/dgraph-io/ristretto/v2/z.OpenMmapFile

This is probably because f.Close is called before prefill completes, which implies that creating the new index for old data is slow. Hence I think we have only one option if we want to skip even auto-migration, and that is using this function:

// getSpanKind probes the tag index for each possible kind value and
// returns the first one that has an entry for this service and span.
func getSpanKind(txn *badger.Txn, service string, timestampAndTraceId string) model.SpanKind {
	for i := 0; i < 6; i++ {
		// Build the tag-index key for span.kind = <kind i>.
		value := service + model.SpanKindKey + model.SpanKind(i).String()
		valueBytes := []byte(value)
		operationKey := make([]byte, 1+len(valueBytes)+8+sizeOfTraceID)
		operationKey[0] = tagIndexKey
		copy(operationKey[1:], valueBytes)
		copy(operationKey[1+len(valueBytes):], timestampAndTraceId)
		// A hit means a span of this kind exists for the service.
		if _, err := txn.Get(operationKey); err == nil {
			return model.SpanKind(i)
		}
	}
	return model.SpanKindUnspecified
}

The only problem is that during prefilling, 6 x NumberOfOperations Get queries will be issued. Please review this approach @yurishkuro. I also think we need to discuss auto-creation of the new index: should we create it, or skip creating any new index and use the function given above?

@Manik2708 Manik2708 requested a review from yurishkuro December 23, 2024 19:28
@Manik2708 Manik2708 marked this pull request as draft December 26, 2024 02:07
@Manik2708 Manik2708 marked this pull request as ready for review December 26, 2024 05:22
@Manik2708
Contributor Author

@yurishkuro I finally got rid of migration and now I think it's ready for review! Please ignore my previous comments; the current commit has no linkage to them.

@Manik2708 Manik2708 requested a review from yurishkuro December 26, 2024 16:35
@Manik2708
Contributor Author

make test is passing locally; should we rerun it in CI?

@yurishkuro yurishkuro left a comment
Member

can you revisit the tests by using API methods of the cache instead of manually manipulating its internal data structures? Tests should validate the behavior that the user of the cache expects. The only time it's acceptable to go into internal details is when some error conditions cannot be tested purely through the external API.
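For illustration, such a test could look roughly like this, going only through the cache's public surface (the method names here are placeholders, not necessarily the PR's actual API):

cache := NewCacheStore(ttl)
cache.Update("svc", "op", model.SpanKindServer, expiryTime)
ops := cache.GetOperations("svc", model.SpanKindServer)
require.Equal(t, []string{"op"}, ops)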

@Manik2708
Contributor Author

can you revisit the tests by using API methods of the cache instead of manually manipulating its internal data structures? Tests should validate the behavior that the user of the cache expects. The only time it's acceptable to go into internal details is when some error conditions cannot be tested purely through the external API.

I have fixed all the tests except those for Update and Prefill, and even those do not manipulate the data structures; they are only used to check whether the cache stores data via update or prefill.

@Manik2708 Manik2708 requested a review from yurishkuro December 30, 2024 08:19
@Manik2708
Contributor Author

@yurishkuro Can you please review?

@Manik2708 Manik2708 marked this pull request as draft January 2, 2025 09:31
@Manik2708 Manik2708 marked this pull request as ready for review January 2, 2025 16:09
@Manik2708 Manik2708 requested a review from yurishkuro January 2, 2025 16:12
@yurishkuro
Member

Q: do we have to maintain two indices forever, or is this only a side-effect of having to be backwards compatible with the existing data?

For example, one way I could see this working is:

  • we only write the new index with kind
  • when reading, we do a dual lookup, first in the new index then in the old (if the old exists)
  • we have a config option to turn off the dual-reading behavior. The motivation here is that people rarely keep tracing data for very long, so in 4 months (4 releases) the old index is likely going to be TTLed out anyway.
    • In the first release of the feature this option could be defaulted to ON
    • Then a couple releases down the road we can default it to OFF
    • Then 2 more releases down the road we deprecate the option and remove the old index reading code.
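A sketch of what that dual-read could look like in the reader (all names here are hypothetical, not the PR's actual code):

func (r *TraceReader) findTraceIDs(q *spanstore.TraceQueryParameters) ([]model.TraceID, error) {
	ids, err := r.scanIndex(newIndexPrefix(q)) // new index, written with kind
	if err != nil {
		return nil, err
	}
	if r.dualLookup { // config option, defaulted to ON at first
		oldIDs, err := r.scanIndex(oldIndexPrefix(q)) // pre-kind index
		if err != nil {
			return nil, err
		}
		ids = append(ids, oldIDs...)
	}
	return dedupe(ids), nil
}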

@Manik2708
Contributor Author

Q: do we have to maintain two indices forever, or is this only a side-effect of having to be backwards compatible with the existing data?

For example, one way I could see this working is:

  • we only write the new index with kind

  • when reading, we do a dual lookup, first in the new index then in the old (if the old exists)

  • we have a config option to turn off the dual-reading behavior. The motivation here is that people rarely keep tracing data for very long, so in 4 months (4 releases) the old index is likely going to be TTLed out anyway.

    • In the first release of the feature this option could be defaulted to ON
    • Then a couple releases down the road we can default it to OFF
    • Then 2 more releases down the road we deprecate the option and remove the old index reading code.

The key serviceName+Kind+OperationName+Time+TraceId can't be used in the reader to find trace IDs, because while finding trace IDs we might not know the kind. We can avoid dual lookups while prefilling via your suggested roadmap. This key schema was also discussed in the issue, and it was asked about in the comment #1922 (comment). If we want to use this key schema permanently, then we'd employ a different key, serviceName+OperationName+kind+Time+TraceId; while scanning the indexes we have to create this key from the service and operation. So when a TraceQueryParameters has only a service name and operation name, while scanning we have to append all 6 kinds so as to fetch all trace IDs. Please have a look at this:

serviceName = "service"
operationName = "operation"
//So in the scanning we have to create the following //6 keys:
key1 = "serviceoperation0"
key2 = "serviceoperation1"
...

Then finding the trace IDs would also work fine. So either we have to create an extra index or do this extra scanning!
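Equivalently, the candidate prefixes could be generated in a loop (a sketch, assuming strconv is imported):

prefixes := make([]string, 0, 6)
for k := 0; k < 6; k++ {
	prefixes = append(prefixes, serviceName+operationName+strconv.Itoa(k))
}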

@yurishkuro
Member

yurishkuro commented Jan 2, 2025

serviceName+OperationName+kind+Time+TraceId

This index doesn't make sense to me. It cannot effectively support a query that only includes service+operation; you must always know the kind to get to the desired time range.

Wouldn't it make more sense to append the kind after the Time? Then we have the following two queries:

  1. user does not specify kind - we scan everything within the given time range
  2. user does specify kind - we still scan everything within the given time range and discard entries with the wrong kind. As you mentioned earlier, the probability of having different kinds for the same service+operation is quite low, so even if it does happen, in the worst case we'd have to scan 5x more entries (kind can have 5 different values), but that worst case will almost never happen because in most cases there will be exactly 1 value.
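A sketch of that read path over a Badger iterator (kindFromKey and traceIDFromKey are made-up helpers): seek to service+operation+startTime and filter on kind only when the caller specified one.

for it.Seek(seekKey); it.ValidForPrefix(prefix); it.Next() {
	key := it.Item().Key()
	kind := kindFromKey(key) // the byte between the timestamp and the trace ID
	if wantKind != "" && kind != wantKind {
		continue // wrong kind: worst case ~5x extra entries, usually none
	}
	traceIDs = append(traceIDs, traceIDFromKey(key))
}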

@Manik2708
Contributor Author

serviceName+OperationName+kind+Time+TraceId

This index doesn't make sense to me. It cannot effectively support a query that only includes service+operation; you must always know the kind to get to the desired time range.

Wouldn't it make more sense to append the kind after the Time? Then we have the following two queries:

  1. user does not specify kind - we scan everything within the given time range
  2. user does specify kind - we still scan everything within the given time range and discard entries with the wrong kind. As you mentioned earlier, the probability of having different kinds for the same service+operation is quite low, so even if it does happen, in the worst case we'd have to scan 5x more entries (kind can have 5 different values), but that worst case will almost never happen because in most cases there will be exactly 1 value.

We can try this, but then we need to remember that it will break these conventions:

  1. The last 16 bytes of the key are the trace ID
  2. The 8 bytes before that are the timestamp

Only this key would break these conventions. Also, this key need not be present when tags are there, so we need to prepare two separate logics for scanning and parsing.

@yurishkuro
Member

Why is it "breaking" if kind introduced after Time but not "breaking" when it's before Time?

Whatever we do the changes must be backwards compatible.

@Manik2708
Contributor Author

Manik2708 commented Jan 2, 2025

Why is it "breaking" if kind introduced after Time but not "breaking" when it's before Time?

Whatever we do the changes must be backwards compatible.

Please see this:

func createIndexKey(indexPrefixKey byte, value []byte, startTime uint64, traceID model.TraceID) []byte {
	// KEY: indexKey<indexValue><startTime><traceId> (traceId is the last 16 bytes of the key)
	key := make([]byte, 1+len(value)+8+sizeOfTraceID)
	key[0] = (indexPrefixKey & indexKeyRange) | spanKeyPrefix
	pos := len(value) + 1
	copy(key[1:pos], value)
	binary.BigEndian.PutUint64(key[pos:], startTime)
	pos += 8 // sizeOfTraceID / 2
	binary.BigEndian.PutUint64(key[pos:], traceID.High)
	pos += 8 // sizeOfTraceID / 2
	binary.BigEndian.PutUint64(key[pos:], traceID.Low)
	return key
}

This is how we are creating a key: when service+operation+kind is used, it is passed as value here, but appending the kind after the time will break this.
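For comparison, a variant that appends the kind after the timestamp could look like this (a sketch, not the PR's code; createIndexKeyWithKind is a made-up name):

func createIndexKeyWithKind(indexPrefixKey byte, value []byte, startTime uint64, kind byte, traceID model.TraceID) []byte {
	// KEY: indexKey<indexValue><startTime><kind><traceId>
	key := make([]byte, 1+len(value)+8+1+sizeOfTraceID)
	key[0] = (indexPrefixKey & indexKeyRange) | spanKeyPrefix
	pos := len(value) + 1
	copy(key[1:pos], value)
	binary.BigEndian.PutUint64(key[pos:], startTime)
	pos += 8
	key[pos] = kind // the extra byte that shifts the trace ID back by one
	pos++
	binary.BigEndian.PutUint64(key[pos:], traceID.High)
	pos += 8
	binary.BigEndian.PutUint64(key[pos:], traceID.Low)
	return key
}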

@yurishkuro
Member

why does it matter? We're creating an index with a different layout; we don't have to be restricted by how that specific function is implemented, especially since we are introducing a different lookup process (it seems all other indices do a direct lookup by the prefix up to the timestamp and then scan/parse).

@Manik2708
Contributor Author

why does it matter? We're creating an index with a different layout; we don't have to be restricted by how that specific function is implemented, especially since we are introducing a different lookup process (it seems all other indices do a direct lookup by the prefix up to the timestamp and then scan/parse).

Ok, will give it a try and get back to you! Thanks for your time!

@Manik2708
Contributor Author

@yurishkuro I have tried to take care of all the edge cases, please review!

@Manik2708
Contributor Author

@yurishkuro This PR is ready to review, I have added dual lookups and backward compatibility tests in this PR.

plugin/storage/badger/config.go Outdated
plugin/storage/badger/config.go Outdated
}
err := writer.writeSpanWithOldIndex(&oldSpan)
require.NoError(t, err)
traces, err := reader.FindTraces(context.Background(), &spanstore.TraceQueryParameters{
Member

not sure I follow this test. What does FindTraces have to do with span kind in the operations retrieval? Also, backwards compatibility test only makes sense when it is executed against old and new code.

Contributor Author

We have changed the key, but we need to make sure that traces are also fetched from the old key when dual lookup is turned on. Note that the operation key is used for fetching traces as well as for filling the cache. If you look at this code, we first write a span with the old key and then test whether it is able to fetch the traces associated with that key (please see L42).

}
*/
// The uint64 value is the expiry time of operation
operations map[string]map[model.SpanKind]map[string]uint64
Member

to clarify, CacheStore is used to avoid expensive scans when loading services and operations, correct? In other words, it's an all-in-memory structure. In that case, why can we not change just the value of the map to be a combo {kind, expiration} instead of changing the structure? When loading, scanning everything for a given service is still going to be a negligible amount of data.

Contributor Author

Can't understand this! Are you saying to keep these structures?

services map[string]uint64 // Already in the cache
operations map[string]map[string]kind
type kind struct {
	kind   SpanKind
	expiry uint64
}

If yes, then how do we handle a query to fetch all operations for a service and kind? Should we iterate over all operations and skip those which are not of the required kind? (We are using a similar approach currently, i.e. iterating over all kinds and skipping the unrequired ones, but that was justified because there can be at most 6 kinds, while the number of operations is unbounded. So is this option viable?)
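For reference, the lookup that question implies would be roughly this (placeholder names, using the flat {kind, expiry} value from above):

func (c *CacheStore) getOperations(service string, want model.SpanKind) []string {
	var out []string
	for op, entry := range c.operations[service] {
		if want == "" || entry.kind == want {
			out = append(out, op)
		}
	}
	return out
}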

Member

Yes, this structure.

Contributor Author

So iterating over all operations and skipping the kinds that are not required would be right?

Member

Yes

Contributor Author

While working towards this, I am coming to the conclusion that this approach will lead to the same problem: spans with the same operation and service name but a different kind will end up overriding each other's data. So I don't think this structure is going to be a correct approach! The only viable option I can think of is the 3D map. So should we move forward with the 3D map, or can we find a better idea?
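The collision in miniature (a sketch; kindEntry is a made-up type):

// Flat value: the second write wins and the server entry is lost.
c.operations["svc"]["op"] = kindEntry{kind: model.SpanKindServer, expiry: t1}
c.operations["svc"]["op"] = kindEntry{kind: model.SpanKindClient, expiry: t2}

// 3D map: both kinds coexist.
c.operations["svc"][model.SpanKindServer]["op"] = t1
c.operations["svc"][model.SpanKindClient]["op"] = t2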

plugin/storage/badger/spanstore/cache.go Outdated
plugin/storage/badger/spanstore/kind.go Outdated
plugin/storage/badger/spanstore/writer.go Outdated
plugin/storage/badger/spanstore/writer.go Outdated
plugin/storage/badger/spanstore/writer.go Outdated
plugin/storage/badger/spanstore/writer.go Outdated
@Manik2708 Manik2708 changed the title SpanKind support for badger [feat][storage] Add SpanKind support for badger Jan 21, 2025
yurishkuro pushed a commit that referenced this pull request Jan 21, 2025
…er (#6575)

## Which problem is this PR solving?
Comment:
#6376 (comment)

## Description of the changes
- The cache was directly contacting the DB to prefill itself, which is not a
good design; this responsibility is now given to the reader, which reads from
Badger and fills the cache.

## How was this change tested?
- Unit and e2e tests

## Checklist
- [x] I have read
https://github.com/jaegertracing/jaeger/blob/master/CONTRIBUTING_GUIDELINES.md
- [x] I have signed all commits
- [x] I have added unit tests for the new functionality
- [x] I have run lint and test steps successfully
  - for `jaeger`: `make lint test`
  - for `jaeger-ui`: `npm run lint` and `npm run test`

---------

Signed-off-by: Manik2708 <[email protected]>
@Manik2708 Manik2708 marked this pull request as draft January 22, 2025 05:18
Signed-off-by: Manik2708 <[email protected]>
@Manik2708 Manik2708 marked this pull request as ready for review January 22, 2025 09:05
featuregate.StageBeta, // enabled by default
featuregate.WithRegisterFromVersion("v2.2.0"),
featuregate.WithRegisterToVersion("v2.5.0"),
featuregate.WithRegisterDescription("Allows reader to look up for traces from old index key"),
Contributor Author

I have taken the version values from the PR; I am currently unclear about these versions. Secondly, should I link this gate to the PR or to the issue? The issue is not directly talking about dual lookup.
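For context, a full registration with the Collector's featuregate package generally looks like the sketch below; the gate ID here is made up, and only the options quoted in the diff above come from the PR.

var dualLookupGate = featuregate.GlobalRegistry().MustRegister(
	"jaeger.badger.dualLookup", // hypothetical gate ID
	featuregate.StageBeta,      // enabled by default
	featuregate.WithRegisterFromVersion("v2.2.0"),
	featuregate.WithRegisterToVersion("v2.5.0"),
	featuregate.WithRegisterDescription("Allows reader to look up for traces from old index key"),
)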


store *badger.DB
Contributor Author

@Manik2708 Manik2708 Jan 22, 2025

I couldn't find any use of store in the cache now, because the cache depends on the reader to prefill itself.

@Manik2708 Manik2708 requested a review from yurishkuro January 22, 2025 09:12
Labels
area/storage changelog:new-feature Change that should be called out as new feature in CHANGELOG enhancement storage/badger Issues related to badger storage

Successfully merging this pull request may close these issues.

Badger storage plugin: query service to support spanKind when retrieve operations for a given service.