Investigate By-Ref Vector and Matrix Operators in C# 8.0 #600
Comments
Also, I'm not sure the current arithmetic implementation is the most optimal one. There is quite a big difference in the IL being generated. Take for example the Vector3 `+` operator:

Source:

IL:

Assembly:

And compare it to this:

Source:

IL:

Assembly:

BenchmarkDotNet also shows that creating a new vector is a lot faster than modifying the vector. Creating a small test project in Duality with a stopwatch also confirms this. Might be worth looking into, and if you want to use the `in` keyword (readonly ref, C# 7.2) you would have to do this anyway.
Can you provide the C# code for each of the IL code snippets? To be really sure how this ends up on the CPU, we'll also need to compare the JITted assembly code, since a lot of optimizations only happen in the JIT stage.
What do you mean by creating a new vector vs. modifying one? Using the …
Basically it's this. The current version (where `vector` is the vector you get in through the parameters):

vs this:

The last one is also compatible with the `in` keyword, as you do not modify the vector you get in through the parameters (meaning you can add the `in` keyword to the parameter without having to change any code). It also seems that, at least in IL, it's a lot shorter, and measuring it shows it's faster. Haven't checked the JITted code. EDIT: added the source to my previous post.
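The snippets from this exchange weren't preserved in this extract. As a rough, hedged sketch of the two variants being compared, written as named static methods on a stand-in struct so both can be shown side by side (in the real code each would be the body of `operator +`, and the method names here are purely illustrative):

```csharp
// Stand-in struct, assuming public X/Y/Z fields like Duality's Vector3.
public struct Vector3
{
    public float X, Y, Z;
    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }

    // Variant A ("current"): reuse the by-value parameter copy as the result.
    // This writes to the parameter, so it could not simply be marked `in`.
    public static Vector3 AddByReusingParameter(Vector3 left, Vector3 right)
    {
        left.X += right.X;
        left.Y += right.Y;
        left.Z += right.Z;
        return left;
    }

    // Variant B ("new vector"): construct the result directly. Neither
    // parameter is modified, so adding the `in` keyword later needs no
    // further code changes.
    public static Vector3 AddByConstructingResult(Vector3 left, Vector3 right)
    {
        return new Vector3(left.X + right.X, left.Y + right.Y, left.Z + right.Z);
    }
}
```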
Ah, sure. For the …
Might be better memory locality, since it doesn't have to go back to that vector between the calculations? I don't really know; I just noticed this when I looked at the IL and decided to measure it. Could be a good idea to try to reproduce it and see whether that difference shows up on your machine as well?
Good call. Not sure I'll get around to benchmarking this myself, but maybe you can just post the C# file with your benchmarking code and the full BenchmarkDotNet results for reference? Should be enough of a statement for when we get back to this issue later.
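The benchmark file itself never made it into this extract; a minimal BenchmarkDotNet harness in the spirit of what's being asked for could look like this, with placeholder names that reuse the stand-in `Vector3` sketched above:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class VectorAddBenchmark
{
    private Vector3 a = new Vector3(1, 2, 3);
    private Vector3 b = new Vector3(4, 5, 6);

    [Benchmark(Baseline = true)]
    public Vector3 ReuseParameter() => Vector3.AddByReusingParameter(this.a, this.b);

    [Benchmark]
    public Vector3 ConstructResult() => Vector3.AddByConstructingResult(this.a, this.b);
}

public static class Program
{
    // Running this prints a results table comparing the two variants.
    public static void Main() => BenchmarkRunner.Run<VectorAddBenchmark>();
}
```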
Okay, that really looks like there's an obvious perf gain from restructuring the operators that way. Very nice find 👍 If you're up for it, this could be a worthwhile …
Are you 100% sure that you can still multiply both ways, e.g. both `vector * scalar` and `scalar * vector`?
I didn't remove them; I changed the reversed variant to use the logic of the non-reversed variant in order to reduce code duplication.
Ah, makes sense. If this comes at the cost of performance, I would value performance over avoiding code duplication in this case; otherwise, good call.
No, I measured it and the performance is the same. The JIT seems to inline them, so there is no difference in performance.
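A sketch of what "the reversed variant uses the logic of the non-reversed variant" presumably looks like in code, declared inside the `Vector3` struct (illustrative only, not the actual PR):

```csharp
// Primary overload: vector * scalar.
public static Vector3 operator *(Vector3 vec, float scale)
{
    return new Vector3(vec.X * scale, vec.Y * scale, vec.Z * scale);
}

// Reversed overload: scalar * vector simply forwards to the overload above,
// avoiding duplicated math; the JIT inlines the call, so performance is the same.
public static Vector3 operator *(float scale, Vector3 vec)
{
    return vec * scale;
}
```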
Now that my PR is merged: aside from the much better performance when using these operators, adding the `in` keyword is really trivial and will require no other code changes.

New:
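The "New" snippet itself isn't preserved here; a hedged sketch of what the change presumably amounts to, given the restructured operator, is just swapping the parameter modifiers:

```csharp
// Old: parameters passed by value.
public static Vector3 operator +(Vector3 left, Vector3 right)
{
    return new Vector3(left.X + right.X, left.Y + right.Y, left.Z + right.Z);
}

// New (C# 7.2+): parameters passed as readonly references. The body is
// unchanged because it never writes to left or right.
public static Vector3 operator +(in Vector3 left, in Vector3 right)
{
    return new Vector3(left.X + right.X, left.Y + right.Y, left.Z + right.Z);
}
```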
This issue can be done as soon as we switch to C# 7.2. Do remember to benchmark these to see whether it improves performance; there is a chance it actually reduces performance for smaller vectors!
There are cases where defensive copies might occur when using the `in` keyword: https://blogs.msdn.microsoft.com/seteplia/2018/03/07/the-in-modifier-and-the-readonly-structs-in-c/

However, since the …
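For reference, the defensive-copy pitfall the linked post describes is roughly the following (a generic illustration, not Duality code):

```csharp
using System;

public struct MutablePoint // note: not declared as a readonly struct
{
    public float X;
    public float Length() => Math.Abs(X); // ordinary instance method
}

public static class DefensiveCopyExample
{
    // Because MutablePoint is not readonly, the compiler cannot prove that
    // Length() leaves `p` untouched, so calling it through an `in` parameter
    // forces a hidden defensive copy of the struct before the call.
    public static float Measure(in MutablePoint p)
    {
        return p.Length();
    }
}
```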
As a further enhancement, in C# 8.0 one can now add `readonly` to properties, methods, etc. to denote that they do not change state: https://docs.microsoft.com/en-us/dotnet/csharp/whats-new/csharp-8#readonly-members (in C# 7.3 this was only possible at the struct level). This will enable the compiler to skip more defensive copies.
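A short sketch of the C# 8.0 readonly-members feature mentioned above, applied to an illustrative vector type (the member names are just examples):

```csharp
public struct Vector3Example
{
    public float X, Y, Z;

    // C# 8.0: individual members can be marked readonly to guarantee they do
    // not mutate the struct, which lets the compiler skip defensive copies at
    // call sites (previously this guarantee existed only for readonly structs).
    public readonly float LengthSquared => X * X + Y * Y + Z * Z;

    public readonly override string ToString() => $"({X}, {Y}, {Z})";
}
```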
Now that Duality is using C# 7.3, this issue can get picked up at any time.
Making vector or matrix types readonly would be quite a usability downgrade, since that would prevent field assignments. We can still avoid performance regressions though, if we use the new … With that in mind, I think it makes sense to defer this until we're done with #698, right up to C# 8.0.
Agreed, one could also consider adding methods like `WithX(...)`.

So this:

```csharp
var vector = new Vector3(1, 2, 3);
vector.X = 6;
vector.Z = 6;
// Do something with vector
```

Changes to:

```csharp
var vector = new Vector3(1, 2, 3);
var modifiedVector = vector.WithX(6)
                           .WithZ(6);
// Do something with modifiedVector
```

The meaning is slightly different, but this does allow you to mark the entire vector as readonly.
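An implementation sketch of the suggested With-style helpers (hypothetical names, following the usage shown above):

```csharp
public readonly struct Vector3
{
    public readonly float X, Y, Z;

    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }

    // Each helper returns a copy with one component replaced, which is what
    // allows the struct and its fields to be declared readonly.
    public Vector3 WithX(float x) => new Vector3(x, Y, Z);
    public Vector3 WithY(float y) => new Vector3(X, y, Z);
    public Vector3 WithZ(float z) => new Vector3(X, Y, z);
}
```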
I acknowledge the technical possibility, but that just reads terrible and wrong 😄 (Also: ref / out support, additional copies, and potentially other stuff I'm not even thinking about - just give me plain old data vectors.)
Hmm, I don't think it's that bad though. However, another reason not to implement my suggestion (yup, I now disagree with myself :P) might be this: https://referencesource.microsoft.com/#System.Numerics/System/Numerics/Vector3_Intrinsics.cs,3e75f0b7820a3bf5

There is a chance that we would want to switch to these vectors at some point in the future, since the System.Numerics vectors are a lot faster due to things like SIMD optimizations. Better to try and match that public API if possible.
Yeah, I was eyeballing that possibility as well - definitely something to keep in mind.
Summary

As mentioned in issue #598, the new `in` parameter keyword is allowed on operators and when using literal values, so operator implementations can supersede the previous static by-ref operator alternatives.

Analysis

- Make sure that using the `in` keyword implicitly with non-referencable values (like literals or property return values) has no negative impact on performance. Read the docs, and if that's not 100% clear, check out the generated IL and x86 code in a sample project.
- Remove the static `ref` methods from `Vector2/3/4`, `Quaternion` and `Matrix3/4` if they have an operator equivalent.
- Use the `in` keyword on parameters provided to the operators in question.
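To illustrate the second and third Analysis points, this is the kind of static by-ref helper that an `in`-based operator could supersede, declared inside the `Vector3` struct (the actual Duality signatures may differ; this is a sketch):

```csharp
// Older by-ref pattern (typical of OpenTK-style math APIs): avoids struct
// copies at the call site, but with awkward call syntax.
public static void Add(ref Vector3 left, ref Vector3 right, out Vector3 result)
{
    result = new Vector3(left.X + right.X, left.Y + right.Y, left.Z + right.Z);
}

// With `in` parameters the plain operator achieves the same copy-free call,
// so the static helper becomes redundant where an operator equivalent exists.
public static Vector3 operator +(in Vector3 left, in Vector3 right)
{
    return new Vector3(left.X + right.X, left.Y + right.Y, left.Z + right.Z);
}
```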