
multiple 3d clips playing via timeline inquiry #12

Open
PockPocket opened this issue Apr 25, 2024 · 7 comments

Comments

@PockPocket

Hi Marek,

I hope you are doing great.

I've been playing a lot with the project over the past year, making compositions out of multiple 3D clips.

Playing and working with multiple clips via the timeline works well; the only issue is that there is a small FPS drop (around 10 FPS) every time a new 3D clip is activated.

I was wondering if you knew of any way to make successive 3D clips load into the scene without causing the jitter that comes with the FPS drop?

Sending good vibes your way,
Mathieu

@marek-simonik
Owner

Hi Mathieu,

I'm not sure what causes the FPS drop, but my first guess is that it might be related to the initialization of VFX Graph for each newly added/activated clip.

If displaying only the point cloud without any effects is sufficient for your purposes, then you might want to test whether decreasing the number of particles of the VFX Graph effect mitigates the FPS drop; here is a relevant guide that explains how to reduce the number of particles: #8 (comment)

In any case, I recommend testing whether reducing the number of spawned particles of the VFX Graph has any effect on FPS stability.

All the best,
Marek

@PockPocket
Author

Hi Marek,

I hope you’re well!

I've been diving back into the project, having explored Depthkit and its plugin, which allowed me to play 3D videos captured with the Kinect Azure at a stable 30 FPS in VR, with VFX Graph at 5 million points. Although that workflow was promising, I found the recordings less smooth, and filming with my iPhone offers better portability and practicality, as it doesn't require a PC connection during capture.

So, I'm back at it, still looking for a way to address the FPS limitations. Currently, my R3D clips max out at 21-22 FPS when played in play mode through the timeline. I’ve tried reducing the point count, but unfortunately, that hasn’t improved the frame rate. Are you also experiencing this 21-22 FPS limit on your end?

Additionally, I haven’t yet figured out how to eliminate the lag when loading successive R3D clips from the timeline. There’s a noticeable FPS drop and lag each time a new R3D file is loaded into the scene. I’d appreciate any advice on how to maintain smoother transitions or improve FPS stability, especially for previously filmed R3D files imported into the asset folder.

Thanks a lot for your insights, and sending good vibes your way!

Best,
Mathieu

[screenshot 1]

[screenshot 2]

@marek-simonik
Owner

Hi Mathieu,

do you use the simple Record3D_Simple_Streaming VFX graph, or the VFX graph effect created by Keijiro Takahashi (the Particles object)? If you use the simple graph, it should be easier to get more performance.

Currently, I don't have access to a computer with Unity, so I cannot test it myself, but based on the second screenshot you shared, your VFX graph seems to be different from what I recommended in #8 (comment)

In particular, your VFX graph sets a lifetime for the particles in the "Initialize Particles" section (so the particles repeatedly die and get respawned, which hurts performance), and it also appears that you are not updating the Position and Color attributes of the already spawned particles in the "Update Particles" section of the VFX graph. The number of spawned particles in the "Spawn" section is also very high; try manually lowering it to e.g. 1,000,000 or an even lower number.

Please try to replicate the VFX graph shown in #8 (comment) — in particular:

  • Uncheck "Set Lifetime Random (Uniform)" so that particles don't die (that way, the same particle's Position and Color attributes can be updated in each frame instead of creating a new stream of particles every frame).
  • Make sure that in the "Update Particle" section, you update the Position and Color attributes of each particle from the textures (as is done in the "Initialize Particle" section).
  • See what happens when you lower the particle count in the "Spawn" section.

If there are still performance issues, try lowering the number of spawned and/or initialized particles (in the "Spawn" and "Initialize Particle" sections) to absurdly low values (e.g. 1,000 particles). If performance is as bad as before, then the bottleneck might be somewhere else.

@PockPocket
Author

PockPocket commented Nov 11, 2024

I've replicated the graph and tried setting really low values for spawned and initialized particles (less than 1,000), and the 20 FPS bottleneck was still there. Also, the colors of the texture are now flashing red, blue, and yellow repeatedly, which wasn't happening with my previous graph 🤔

Any other idea on what could cause this bottleneck?

I've tried several new projects with .r3d and VFX graph, and it is always present in all of them.

Thanks for your help, it is really appreciated!

[screenshot]

@marek-simonik
Owner

I still cannot try it in Unity myself, so let me at least ask a few questions:

  1. What happens if you set the spawn and initialize particle counts to just 1? Do you still get only 20 FPS?
  2. How many clips do you have in the Timeline? Do you observe 20 FPS even with a single video in the Timeline?
  3. Are you trying to replay LiDAR videos which were recorded with the "Higher-quality LiDAR recording" option enabled in the Settings section of Record3D? Such videos will be saved with 1920x1440 px RGB images. Perhaps that might be the bottleneck (i.e. slower loading and decoding of the high-resolution JPG images)?
  4. Do you see 20 FPS even with 3D videos recorded via the selfie FaceID camera?

@PockPocket
Author

PockPocket commented Nov 16, 2024

  1. Still 20 FPS.

  2. one clip is active in the timeline

  3. I was recording with "High-quality LiDAR recording" option enabled. I did a test with the option disabled and the playback was still 20 fps when in play mode.

  4. 3D videos recorded with the selfie FaceID camera play at 70 FPS, but there's a bump every 3 seconds where the FPS drops below 30. This is also present when the spawn and initialize values are set to one in the graph you recommended. Out of curiosity, I also tried it with my previous graph, and the result was exactly the same FPS and bumps. Any idea what is causing this bump?

Update on 4: I brought this conversation to ChatGPT and it suggested verifying whether "Use Incremental GC" was enabled in the Player settings. Enabling it kind of solved the bumps (made them smoother at least)! I can play the clip with either one of the graphs, with 4M active points playing at 45-55 FPS (it jumps from 45 to 55 roughly every 1.5 seconds, but the overall visualization is smooth) on my RTX 2060 gaming laptop.

chatgpt convo: https://chatgpt.com/share/673b7c47-54ec-8011-b858-7aa854478e63

  • I've only enabled incremental GC. I didn't find any setting related to "Incremental Garbage Collection"; not sure if it exists.

Update on 3: Tested this after "Incremental GC" was enabled: I get up to 45 FPS for 2M points with LiDAR at low quality.
High-quality recordings still play back at 20 FPS even with spawned and initialized particles set to 1; I wonder what's causing this bottleneck.

thanks a lot for your precious help.

cheers,

Mathieu

@marek-simonik
Owner

Apologies for the delay.

That's an interesting update on point 4! I wouldn't have been able to suggest this myself, as I don't have as much experience with Unity.

TL;DR: I think that the bottleneck might not be caused by the GPU, but by the CPU, and it might originate from the DecompressFrame() method of the record3d_unity_playback library. Measure the execution time of DecompressFrame() in the C# code for different types of Record3D videos to conclude whether it is really the bottleneck.

If varying the amount of VFX graph particles does not influence the playback speed, then the bottleneck seems to be caused by the higher amount of RGB pixels of LiDAR videos (even when using non-HQ LiDAR videos); here is an overview of resolutions of the RGB images:

| Sensor type | Resolution [px] | Num. pixels | [num pixels] / [num pixels of FaceID] |
| --- | --- | --- | --- |
| FaceID | 640 × 480 | 307,200 | 1.00 |
| non-HQ LiDAR | 960 × 720 | 691,200 | 2.25 |
| HQ LiDAR | 1920 × 1440 | 2,764,800 | 9.00 |
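The ratios in the last column are just the pixel-count quotients; a quick arithmetic check (Python, purely illustrative):

```python
# Pixel counts per Record3D sensor type (resolutions from the table above).
resolutions = {
    "FaceID": (640, 480),
    "non-HQ LiDAR": (960, 720),
    "HQ LiDAR": (1920, 1440),
}

faceid_pixels = 640 * 480  # 307,200

for name, (w, h) in resolutions.items():
    pixels = w * h
    ratio = pixels / faceid_pixels
    print(f"{name}: {pixels} px, {ratio:.2f}x FaceID")
# FaceID: 307200 px, 1.00x FaceID
# non-HQ LiDAR: 691200 px, 2.25x FaceID
# HQ LiDAR: 2764800 px, 9.00x FaceID
```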

As can be seen, even the non-HQ LiDAR videos have 2.25x more pixels than the FaceID videos. It doesn't matter how many particles are specified in the VFX graph, because for each frame of the animation, the full RGB and Depth images are loaded by the C# code from the .r3d file, and then the two images are post-processed into a 3D point cloud buffer (to be precise, there are 2 buffers: one for color and one for 3D positions). When you specify 1,000 particles in the VFX graph, the VFX graph system "picks" just those 1,000 particles from that point cloud buffer. The buffer contains data for all the pixels, though, regardless of how many particles the VFX graph system eventually samples from the buffer.

This process takes place for every video frame that is loaded. The whole JPG image must be decoded into raw RGB pixel values and their 3D positions must be computed. When the resolution of the RGB and Depth images is not the same (which is the case with all LiDAR videos), then the depth value must be interpolated from the 256x192 px depth image for each pixel of the RGB image (this adds further computational burden to the CPU). The depth interpolation step does not occur if the RGB and Depth images have the same resolution (which is the case for FaceID videos).
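The actual interpolation happens inside the native library, but the per-pixel cost is easy to picture. Here is a minimal, pure-Python sketch of bilinear depth sampling; the function name and flat row-major buffer layout are my own illustration, not the code of record3d_unity_playback:

```python
def bilinear_sample(depth, w, h, u, v):
    """Sample a depth map (h rows x w cols, flat row-major list) at
    normalized coordinates (u, v) in [0, 1] via bilinear interpolation.
    This kind of lookup must run once per RGB pixel for LiDAR videos,
    since the 256x192 depth map is smaller than the RGB image."""
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = depth[y0 * w + x0] * (1 - fx) + depth[y0 * w + x1] * fx
    bot = depth[y1 * w + x0] * (1 - fx) + depth[y1 * w + x1] * fx
    return top * (1 - fy) + bot * fy

# Upsample a tiny 2x2 "depth map" to 4x4 by sampling at each target pixel.
small = [1.0, 2.0,
         3.0, 4.0]
upsampled = [bilinear_sample(small, 2, 2, x / 3, y / 3)
             for y in range(4) for x in range(4)]
```

For a 960×720 RGB image this loop body runs ~691,000 times per frame, which is why the work scales with RGB resolution rather than with the VFX graph's particle count.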

This decompression of JPG image and depth image, together with the computation of the 3D positions for each valid pixel, takes place in the DecompressFrame() method provided by the record3d_unity_playback library, which is stored in the Assets/Scripts/Record3D/Playback folder, and whose source code you can download here.

I would highly recommend that you first measure how much CPU time it actually takes to run the DecompressFrame() function from the C# code, so that you can determine whether it really is the bottleneck — try measuring the time for all types of Record3D videos (FaceID, non-HQ and HQ LiDAR) and see whether the change in duration corresponds to how much the FPS drops.

You could measure the execution time of DecompressFrame() as suggested in this SO answer:

```csharp
// Requires: using System.Diagnostics;
Stopwatch sw = Stopwatch.StartNew();

DecompressFrame(jpgBuffer,
    (uint)jpgBuffer.Length,
    lzfseDepthBuffer,
    (uint)lzfseDepthBuffer.Length,
    this.rgbBuffer,
    this.positionsBuffer,
    this.width_, this.height_,
    this.fx_, this.fy_, this.tx_, this.ty_);

sw.Stop();

// In the Unity Editor, Debug.Log() is easier to see than Console.WriteLine().
Console.WriteLine("Time taken: {0}ms", sw.Elapsed.TotalMilliseconds);
```

If you see that DecompressFrame() really is the bottleneck, then you could do one of these things (or a combination of them):

  • Lower the amount of loaded pixels; you could e.g. decide to resize the RGB image, or sample every 2nd or 4th (or …) pixel.
  • Parallelize the loading of the RGB and Depth images in DecompressFrame(); the RGB and Depth images could be loaded concurrently, although most of the time is likely spent decompressing the JPEG RGB image, since the Depth image is small for LiDAR videos.
  • Parallelize the generation of the particle poses buffer in DecompressFrame().
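As an illustration of the first option: sampling every 2nd pixel in both axes cuts the per-frame work roughly 4x. A Python sketch of the indexing pattern (the buffer name and flat row-major layout are hypothetical, since DecompressFrame() itself is native code):

```python
def subsample(rgb, width, height, stride=2):
    """Keep every `stride`-th pixel in both axes, reducing the number of
    pixels (and downstream 3D-position computations) by stride^2.
    `rgb` is a flat row-major list of (r, g, b) tuples."""
    out = []
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            out.append(rgb[y * width + x])
    return out

# A 4x4 image subsampled with stride 2 keeps 4 of the 16 pixels.
img = [(i, i, i) for i in range(16)]
kept = subsample(img, 4, 4)  # pixels at flat indices 0, 2, 8, 10
```

The same striding would have to be applied consistently to the depth lookup and the positions buffer so that color and position stay paired per particle.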
