diff --git a/docs/assets/character/README.md b/docs/assets/character/README.md index 29fbcd1..0347b89 100644 --- a/docs/assets/character/README.md +++ b/docs/assets/character/README.md @@ -4,118 +4,139 @@ sidebar_position: 10 # Character -Warudo supports the standard [VRM ](https://vrm.dev/en/univrm/)format. If your model format is not VRM (for example, it is a MMD or a VRChat model), you can use the [Mod SDK](../../modding/mod-sdk.md) to export it to `.warudo` format and load it into Warudo. +Characters are the core of Warudo. Warudo supports the standard [VRM](https://vrm.dev/en/univrm/) model format. If your model format is not VRM (for example, it is a MMD or a VRChat model), you can use the [Mod SDK](../../modding/mod-sdk.md) to export it to `.warudo` format and load it into Warudo. -By default, character files should be placed in the `Characters` subfolder of Warudo's data folder. +Character files should be placed in the `Characters` subfolder of Warudo's data folder. :::info -Can't find the data folder? Click "🚀" -> "Open Data Folder". +Can't find the data folder? Click **Menu -> Open Data Folder** to open it. ::: -## Properties +## Motion Capture {#mocap} -* Source: The path to the model file. +To set up motion capture for your character, please refer to the [Motion Capture](../mocap/overview#setup) page. Once you have set up motion capture, you can calibrate tracking and access specific parts of the tracking blueprints directly in the character asset. -### Motion Capture +![](pathname:///doc-img/en-character-1.png) +
Quick calibration and blueprint navigation panels.
-Refer to the [motion capture section](/docs/tutorials/mocap/body-tracking) for details. +## Expressions {#expressions} -* Setup Motion Capture: Select the [face tracking](../../mocap/face-tracking.md) and [pose tracking ](../../mocap/body-tracking.md)template to automatically generate the mocap [blueprints](/docs/mocap/blueprints/overview). +Expressions are a way to control the character's facial expressions on top of your face tracking. The character asset supports both VRM expressions (also known as [VRM BlendShapeClips](https://vrm.dev/en/univrm/blendshape/univrm\_blendshape.html)) and custom expressions. + +During onboarding, Warudo will automatically import all of the VRM expressions in your character, and generate a blueprint to add hotkeys for them. You can also import VRM expressions manually by clicking **Import VRM Expressions**, and generate a blueprint for key binding by clicking **Generate Key Binding Blueprint**. + +:::tip +If you are using a `.warudo` model, but there is a `VRMBlendShapeProxy` component on the GameObject, then Warudo will still be able to import the VRM expressions. +::: + +You can expand each expression in the expression list to preview it (**Enter Expression**) and customize its settings. + +![](pathname:///doc-img/en-character-2.png) +Expanding an expression.
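If it helps to picture the data involved: a VRM BlendShapeClip is essentially a named set of blendshape targets, and Warudo's expressions add a layer on top of that. The Python sketch below is purely illustrative; it is not Warudo's or UniVRM's data model, and the `Joy` clip and field names are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Expression:
    """Illustrative stand-in for an expression: a name, a layer, and blendshape targets."""
    name: str
    layer: int = 0
    # Maps model blendshape names to target values in the 0..1 range.
    targets: Dict[str, float] = field(default_factory=dict)

def apply(expression: Expression, weight: float) -> Dict[str, float]:
    """Scale the expression's blendshape targets by a 0..1 weight (1 = fully entered)."""
    return {name: value * weight for name, value in expression.targets.items()}

# A hypothetical "Joy" clip driving two mouth blendshapes on layer 0.
joy = Expression(name="Joy", layer=0, targets={"mouthSmileLeft": 1.0, "mouthSmileRight": 1.0})
print(apply(joy, 0.5))  # halfway through a fade-in: both targets at 0.5
```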
+ +You can set the **Name** of the expression, which is used for identification in the blueprint. You can also set the **Layer** of the expression. When switching expressions, expressions on the same layer will fade out, while expressions on different layers can overlap. :::info -Think of blueprints as programs that apply the mocap data on your avatar. Read more [here](../../blueprints/blueprints-intro.md). +When importing VRM expressions, the default BlendShapeClips (i.e. Joy, Angry, Sorrow, Fun/Surprised) in the VRM model will be placed on layer 0, while the BlendShapeClips added by the modeler will be placed on layer 1, 2, 3, etc. ::: -* Remove Motion Capture: Remove the assets and blueprints created in a previous motion capture setup. +If an expression already closes your model's eyes, you may want to **Disable Eye Blink Tracking** to prevent the eyes from closing further. Similarly, the **Disable Mouth Tracking** and **Disable Brow Tracking** options can be used to prevent the mouth and brow blendshapes from being controlled by face tracking. -### Expressions +The **Target BlendShapes** list contains all of the blendshapes that will be controlled when this expression is active. You can add or remove blendshapes from this list, and customize the **Target Value**, **Enter Duration**, **Enter Easing**, **Enter Delay**, **Exit Duration**, **Exit Easing**, and **Exit Delay** for each blendshape. -Refer to the [expressions](blendshape-expression.md) subpage for details. +By default, Warudo uses a 0.4s fade-in and fade-out duration when switching expressions. You can customize the **Enter Duration** and **Exit Duration** for each blendshape in **Target BlendShapes**; alternatively, if you want to disable fading all together, you can enable the **Is Binary** option. -### Animation +### Trigger Conditions {#trigger-conditions} -* Save Animation Profile: Save the following settings as an animation profile. Animation profiles can be applied on any character in any scene. -* Load Animation Profile: Load an existing animation profile. Following settings will be overridden. -* Idle Animation: The animation played by default. +Expressions are usually triggered by hotkeys, but you can also trigger an expression using just face tracking! -:::tip -**Warudo has over** [**500 built-in idle animations**](../../misc/idle-animations.md), be sure to try them out! +To set up, add a new entry to the **Trigger Conditions** list. Set up the **Conditions** for your expression trigger, and customize other options as needed. For example, the below settings will trigger the expression when the `mouthSmileLeft` and `mouthSmileRight` blendshapes are both greater than 0.5. + +![](pathname:///doc-img/en-character-4.png) + +You can have more than one entry in the **Trigger Conditions** list, and when any of the **Conditions** in an entry are met, the expression will be triggered. + +:::info +Note the conditions are based on model blendshapes, not face tracking blendshapes. If you are using an ARKit-compatible model, you can simply select the ARKit blendshapes as the conditions. Otherwise, you may need to find the corresponding model blendshapes that your face tracking uses. ::: -Some pose tracking templates(e.g. MediaPipe, RhyLive)support blending the mocap with idle animations, achieving the following effect: +### Advanced -Constraining the `JawOpen` blendshape.
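The trigger-condition behaviour described above amounts to threshold checks over the model's blendshape values. Below is an illustrative Python sketch of that idea, not Warudo's actual evaluation code; it assumes conditions inside one entry are combined with AND (as the smile example above implies) and that separate entries act as alternatives.

```python
# One entry from a hypothetical Trigger Conditions list:
# every (blendshape, threshold) pair in the entry must hold for the entry to fire.
smile_entry = [("mouthSmileLeft", 0.5), ("mouthSmileRight", 0.5)]

def entry_met(blendshapes, entry):
    """True when every condition in a single entry is satisfied."""
    return all(blendshapes.get(name, 0.0) > threshold for name, threshold in entry)

def expression_triggered(blendshapes, entries):
    """True when at least one entry in the Trigger Conditions list is satisfied."""
    return any(entry_met(blendshapes, entry) for entry in entries)

frame = {"mouthSmileLeft": 0.7, "mouthSmileRight": 0.6}
print(expression_triggered(frame, [smile_entry]))  # True: both smile values exceed 0.5
```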
+ +The **Toggle GameObjects** list contains all the child GameObjects that will be toggled when this expression is active. For example, you can use this to toggle the visibility of a particle effect when the character is smiling. + +Finally, below the expression list, you will find **Default BlendShapes** and **Default Material Properties**, which are used when neither any expression nor face tracking is active. + +## Animation {#animation} + +Warudo supports playing animations on your character. Better yet, if you are using a compatible upper body tracking blueprint (e.g., [MediaPipe](../mocap/mediapipe.md), [RhyLive](../mocap/rhylive.md)), animations are automatically blended with motion capture, allowing you to interact with your audience in fun or intimate ways. + + +Here, bending of the upper body comes from the idle animation, while the head and hand movements come from motion capture. When tracking of the hands is lost, the avatar's hands smoothly transitions into the idle animation.
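The hand-off between motion capture and the idle animation is, at its core, a weighted blend whose weight eases toward 1 while tracking is available and back toward 0 when it is lost. The Python below is only a sketch of that idea with made-up numbers (a 0.3 s blend time, scalar joint values); it is not Warudo's blending code, and real implementations interpolate rotations with slerp rather than plain lerp.

```python
def step_blend(weight, tracked, dt, blend_time=0.3):
    """Move the mocap weight toward 1 while tracked, toward 0 once tracking is lost."""
    target = 1.0 if tracked else 0.0
    max_step = dt / blend_time                      # fraction of the blend covered this frame
    delta = max(-max_step, min(max_step, target - weight))
    return weight + delta

def blend_pose(idle_pose, mocap_pose, weight):
    """Interpolate each joint value between the idle animation (weight 0) and mocap (weight 1)."""
    return {joint: (1.0 - weight) * idle_pose[joint] + weight * mocap_pose[joint]
            for joint in idle_pose}

# Tracking drops out: over ~0.3 s the pose settles back into the idle animation.
weight, idle, mocap = 1.0, {"right_elbow": 10.0}, {"right_elbow": 85.0}
for _ in range(12):
    weight = step_blend(weight, tracked=False, dt=1 / 30)
print(round(weight, 2), blend_pose(idle, mocap, weight))  # 0.0 {'right_elbow': 10.0}
```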
+ +In Warudo, there are three types of animations: -* Override Hand Poses: Override the pose of the model's hands, for example, making a "✌" gesture. - * Left Hand / Right Hand: - * Pose: The pose that this hand should make. - * Weight: The weight of the pose, that is, how much the hand should blend into the selected pose. +* **Idle Animation**: The default animation that plays when no other animations are playing. + + ![](pathname:///doc-img/en-character-5.png) +Selecting an idle animation.
+* **Overlaying Animations**: Animations overlaid on top of the idle animation. + + For example, let's say you like the "Cat" animation and want to use it. However, you also want your character to sit down on a bed. To combine the two animations, you can set **Idle Animation** to a sitting animation, while adding a new entry to the **Overlaying Animations** list, setting the **Animation** to "Cat", and click **Mask Upper Body** to ensure the "Cat" animation only affects the upper body. + + ![](pathname:///doc-img/en-character-6.png) +Combining the "Cat" animation with a sitting animation using overlaying animations.
+ + ![](pathname:///doc-img/en-character-7.png) +The "Cat" animation only affects the upper body.
+ + You can add as many overlaying animations as you want; each layer of animation can be assigned its own **Masked Body Parts**, **Weight** and **Speed**, allowing different parts of the body to play different animations. The lower an animation is in the list, the higher its priority. + + +Demo of overlaying animations. Source: https://www.bilibili.com/video/BV1Zt4y1c7Re
+* Transient animations: One-shot animations that are triggered by blueprints——for example, a short dance animation that is played when you receive a Twitch redeem. Check out our [blueprint tutorials](../blueprints/overview) for more details. + +In addition to body animations, you can also set up **Breathing Animation** and **Swaying Animation**, as well as **Override Hand Poses**, i.e., setting the pose of the character's hands to a specific pose, like a peace sign. + +Finally, you can use **Additional Bone Offsets** to make small adjustments to the character's posture; for example, if you want to make the character's head tilt a few degrees to one side. :::info -In the built-in [motion capture templates](mocap/overview), the priority of the idle animation, overlay animations, override hand poses, and [body IK](./#body-ik) are all **lower** than motion capture. +By default, the priority of the idle animation, overlay animations, override hand poses, and [body IK](./#body-ik) are all **lower** than motion capture. -In other words, even if you have set an override hand pose for your right hand, if you raise your right hand and it becomes tracked, the model follows your right hand's pose instead of the override hand pose. +In other words, even if you have set an override hand pose for your right hand, if your right hand is being tracked, say by MediaPipe, the model follows your right hand's pose instead of the override hand pose. ::: -* Breathing Animation: Add breathing to the upper body. -* Swaying Animation: Natural back-and-forth swaying of various parts of the body. -* Overlay Animations Transition Time: The duration of the transition when the overlay animation and override hand pose properties are updated. -* Additional Bone Offsets: If you want to make the head (or any joint of the body) always tilt a few degrees to one side, you can use this. - * Bone: The bone to offset. - * Rotation Offset: The angle of rotation on the X, Y, Z axes of the bone. +## Look At IK {#look-ik} -### Look IK +Look At IK is used to make the character look at a specified target in the scene, like the camera. This is useful for capturing the perfect selfie! :::tip -**What is IK?** IK stands for Inverse Kinematics, and in game engines and Warudo, IK can be understood as "making a part of the model rotate towards, or reach a desired position", without requiring an animation made by an animator. See below: - -![](pathname:///doc-img/zh-assets-character.gif) -Source:https://medium.com/unity3danimation/overview-of-inverse-kinematics-9769a43ba956
+If you just want to maintain eye contact with your audience, we recommend to enable **Look At** when you set up face tracking instead (it is by default enabled). See [Customizing Face Tracking](../mocap/face-tracking) for more details. ::: -Make the character look at a specified target in the scene (e.g. camera). +To use Look At IK, simply enable it and set **Target** to the target that the character should look at. You can adjust the different weights to control how much the character's head, eyes, and body should rotate to look at the target. The **Clamp Weight** options can be used to limit the rotation of the head, eyes, and body. The greater they are, the smaller the range of rotation. -* Target: The target to look at. Must be an entity in the scene, such as a [camera](../camera.md), [prop](../prop.md), [anchor](../anchor.md), etc. -* Overall Weight: 0 means the character looks in the original direction. 1 means the character looks at the specified target. -* Eye Weight: How much the eyes should be looking at the specified target. -* Head Weight: How much the head should be turning to the specified target. -* Body Weight: How much the body should be turning to the specified target. -* Clamp Eye Weight: The greater it is, the smaller the range of movement of the eyes. -* Clamp Head Weight: The greater it is, the smaller the range of rotation of the head. -* Clamp Body Weight: The greater it is, the smaller the range of rotation of the body. +## Body IK {#body-ik} -### Body IK +Body IK is used to make the character's spine or limbs follow a specified target in the scene. -Make the character's spine or limbs follow a specified target in the scene. +![](pathname:///doc-img/zh-assets-character.gif) +Body IK in a nutshell.
-* IK Target: The target that this part of the body should follow. Must be an entity in the scene, such as a [camera](../camera.md), [prop](../prop.md), [anchor](../anchor.md), etc. -* Create Temporary IK Target Anchor At Current Position: Create a temporary anchor asset at the current position of this part of the body, set the IK target to this anchor, and gradually fade in the IK. -* Remove Temporary IK Target Anchor: Remove the temporary anchor asset created for this part of the body, and gradually fade out the IK. +To use Body IK, simply enable it for the spine or a hand/foot, and set **IK Target** to the target that the character should follow. The IK target is usually a [camera](../camera.md), [prop](../prop.md), or an [anchor](../anchor.md). If you want to create a temporary anchor at the current position of the body part, you can click **Create Temporary IK Target Anchor At Current Position**. When you are done, you can click **Remove Temporary IK Target Anchor** to remove the temporary anchor. :::tip Creating a temporary anchor allows you to fix the model's hands in an ideal position, preventing the hands from drifting left and right due to head movements. This can be quite useful for some poses. +Bend goal disabled.
+Bend goal enabled.
+Additional bone offsets disabled.
+Additional bone offsets enabled: LeftShoulder: (10, 0, -10), RightShoulder: (10, 0, 10).
+No hand tracking required!
## Setup -To use the keyboard and touchpad, simply enable the options when [setting up the character's motion capture](character/#motion-capture): +To use keyboard/trackpad, simply enable **Use Keyboard/Trackpad** during the motion capture setup. Please refer to the [Customizing Pose Tracking](../mocap/body-tracking) page for more details. -![](pathname:///doc-img/zh-keyboard-2.webp) - -:::danger -When the keyboard is enabled, please avoid entering sensitive information such as passwords. +:::info +Currently, keyboard/trackpad are only supported for upper body tracking setups. ::: - -The keyboard and touchpad can be turned off separately: - -![](pathname:///doc-img/zh-keyboard-3.webp) - -Advanced settings can be found in the "Generate Input Interfaces Motion" node in the generated pose tracking blueprint: - -![](pathname:///doc-img/zh-keyboard-4.webp) diff --git a/docs/assets/overview.md b/docs/assets/overview.md new file mode 100644 index 0000000..dc9ae92 --- /dev/null +++ b/docs/assets/overview.md @@ -0,0 +1,27 @@ +--- +sidebar_position: 0 +--- + +# Overview + +Assets are the building blocks of Warudo. They are the components that make up your scene. For example, a character is an asset, a prop is an asset, and a camera is also an asset. Assets can be referenced in [blueprints](../blueprints/overview) to add functionality to your scene. + +## Basics + +In the **Assets** tab, you can see a list of all assets in your scene. Below is the toolbar for the **Assets** tab: + +![](pathname:///doc-img/en-assets-1.png) +The toolbar.
+ +The toolbar has the following buttons: +* **Add Asset**: Add a new asset to the scene. +* **Remove Asset**: Remove the selected asset from the scene. +* **Duplicate Asset**: Duplicate the selected asset. +* **Rename Asset**: Rename the selected asset. +* **Import Asset**: Import an asset from a JSON string and add it to the scene. +* **Import Into Selected Asset**: Import an asset from a JSON string and merge it into the selected asset. +* **Export Asset**: Export the selected asset to a JSON string. + +:::tip +You can use the import/export feature to copy assets between scenes. +::: diff --git a/docs/blueprints/controlling-character.md b/docs/blueprints/controlling-character.md new file mode 100644 index 0000000..a5be2ce --- /dev/null +++ b/docs/blueprints/controlling-character.md @@ -0,0 +1,14 @@ +# Controlling Character + + +### **I want to set up a hotkey or toggling meshes (clothing, accessories, etc.).** + +To do this, create a new blueprint (or open an existing one) and setup like below: + +![](pathname:///doc-img/en-assets-character-1.webp) + +### I want to set up a hotkey for switching the idle animation. + +To do this, create a new blueprint (or open an existing one) and setup like below: + +![](pathname:///doc-img/en-assets-character-2.webp) diff --git a/docs/misc/normalizing-model-bones.md b/docs/misc/normalizing-model-bones.md new file mode 100644 index 0000000..c54e540 --- /dev/null +++ b/docs/misc/normalizing-model-bones.md @@ -0,0 +1,9 @@ +--- +sidebar_position: 30 +--- + +# Normalizing Model Bones + +:::info +You can check if a FBX model has normalized bones by importing it into Unity and checking if the bone transforms have zero rotation. +::: diff --git a/docs/tutorials/faq.md b/docs/misc/optimizing-performance.md similarity index 71% rename from docs/tutorials/faq.md rename to docs/misc/optimizing-performance.md index e812099..df62832 100644 --- a/docs/tutorials/faq.md +++ b/docs/misc/optimizing-performance.md @@ -1,9 +1,8 @@ --- -sidebar_position: 20 +sidebar_position: 10 --- - -# Frequently Asked Questions +# Optimizing Performance ## Warudo's frame rate drops dramatically when running simultaneously with other 3D applications/games (e.g., Apex Legends). @@ -32,13 +31,3 @@ Increased GPU priority for Warudo can in turn result in decreased frame rates fo :::info On an Intel i9-9900k + NVIDIA RTX 2080 Ti configuration, running Warudo at 1080p resolution (with a NiloToon-rendered model and a high-precision 3D [environment ](../assets/environment.md)as the background) and playing Elden Ring at high graphics settings, with OBS recording at 1080p, the frame rate stays stable at around 45 FPS. ::: - -## I have selected a camera for motion capture, but my camera does not turn on / camera feed is black. - -Some antivirus software may block programs that are not on their whitelist from accessing the camera. Please disable the camera protection function of your antivirus software, or add Warudo to the whitelist. - -Also note that Warudo currently does not support IP cameras (such as [DroidCam](https://play.google.com/store/apps/details?id=com.dev47apps.droidcam)). - -## I need other help! - -Please join our [Discord](https://discord.gg/Df8qYYBFhH), or contact [tiger@warudo.app](mailto:tiger@warudo.app). 
diff --git a/docs/mocap/body-tracking.md b/docs/mocap/body-tracking.md index 9368290..c12e1e2 100644 --- a/docs/mocap/body-tracking.md +++ b/docs/mocap/body-tracking.md @@ -2,12 +2,53 @@ sidebar_position: 30 --- -# Pose Tracking +# Customizing Pose Tracking -Warudo provides 5 pose tracking solutions: +## Blueprint Customization -* [**MediaPipe**](mediapipe.md): Webcam-based upper body tracking. Advantages include a wider range of motion (e.g. forward/backward hand movements) and more responsive tracking (e.g. punches). Drawbacks include the need for some time to set up and calibrate. **Most expressive.** -* [**RhyLive**](rhylive.md): Upper body tracking developed by [RhythMo](https://rhythmo.cn/). Requires a [Face ID](https://support.apple.com/en-us/HT208109)-compatible iOS device and the free [RhyLive](https://apps.apple.com/us/app/rhylive/) app. -* [**VMC**](vmc.md): Body tracking via an external application, which sends data to Warudo via the [VirtualMotionCapture protocol](https://protocol.vmc.info/english). Compatible with all VR mocap (such as SlimeVR or Mocopi), via [VirtualMotionCapture ](https://vmc.info/)and some VMC-compatible studio mocap. -* [**Rokoko**](rokoko.md): Body tracking via [Rokoko Studio](https://www.rokoko.com/products/studio). -* [**Xsens MVN**](xsens-mvn.md): Body tracking via [Xsens MVN Analyze/Animate](https://base.xsens.com/s/motion-capture-mvn-software?language=en\_US). +During the onboarding process, you can click **Customize Pose Tracking...** to customize the tracking blueprint. + +![](pathname:///doc-img/en-mocap-4.png) +Customizing pose tracking.
+ +### Upper Body Trackers + +For upper body trackers (e.g., MediaPipe, RhyLive, Leap Motion Controller), the following options are available: + +* **Use Keyboard/Trackpad:** Whether to generate animations for the character using a keyboard or trackpad, even when the hands are not tracked. + + ![](pathname:///doc-img/zh-keyboard-1.webp) + + After onboarding, you can adjust the keyboard/touchpad settings by clicking **Character → Motion Capture → Blueprint Navigation → Input Interfaces Animation Settings**. + + The keyboard and touchpad can be turned off separately by disabling either asset: + + ![](pathname:///doc-img/zh-keyboard-3.webp) + :::danger + When the keyboard is enabled, please avoid entering sensitive information such as passwords. + ::: +* **Track Fingers Only:** If enabled, only the fingers will be tracked by this tracking system. This is useful if you are using a full-body pose tracking system that already tracks the hands. + +### Full Body Trackers + +For full body trackers, the following options are available: + +* **Track Full Body / Tracked Body Parts:** If **Track Full Body** is set to **No**, only the body parts that are selected in **Track Full Body** will be tracked by this tracking system. This is useful if you are using multiple full-body pose tracking systems, and want to use different systems to track different body parts. + +## Tracker Customization + +After onboarding, you can go to the corresponding pose tracking asset (e.g., MediaPipe Tracker, VMC Receiver) to customize the tracking data itself. The following options are available: + +* **Mirrored Tracking:** If enabled, Warudo will mirror the tracking data. +* **Head Rotation Intensity/Offset:** Adjust the intensity and offset of the head rotation. + +## Frequently Asked Questions {#FAQ} + +### How can I combine multiple pose tracking systems? + +If you want to set up a secondary pose tracking, you can use the **Secondary Pose Tracking** option during the onboarding process; this is helpful if you are using VR trackers/Mocopi for primary tracking and would like to use [MediaPipe](../mocap/mediapipe.md) or [Leap Motion Controller](../mocap/leap-motion.md) or [StretchSense Gloves](../mocap/stretchsense.md) for added-on finger tracking. + +![](pathname:///doc-img/en-getting-started-8.png) +Setting up secondary pose tracking.
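Combining a primary full-body system with a secondary hand or finger tracker conceptually means the secondary source overrides only the body parts assigned to it. The Python below is a schematic illustration of that merge, not Warudo's implementation; the body-part names and pose placeholders are made up.

```python
# Hypothetical per-frame output of each tracking system: body part -> pose data.
primary_full_body = {"hips": "vr_pose", "spine": "vr_pose",
                     "left_hand": "vr_pose", "right_hand": "vr_pose"}
secondary_fingers = {"left_hand": "glove_pose", "right_hand": "glove_pose"}

# The body parts the secondary system is allowed to drive (its tracked body parts).
secondary_parts = {"left_hand", "right_hand"}

def merge(primary, secondary, allowed_parts):
    """Start from the primary system, then let the secondary override its assigned parts."""
    combined = dict(primary)
    combined.update({part: pose for part, pose in secondary.items() if part in allowed_parts})
    return combined

print(merge(primary_full_body, secondary_fingers, secondary_parts))
# hips/spine stay with the primary tracker; both hands come from the secondary tracker.
```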
+ +If you want to combine more than two pose tracking systems, you can set up the primary pose tracking first using the onboarding assistant, and then set up the subsequent pose tracking systems using **Character → Setup Motion Capture** with **Track Full Body** set to **No** and **Tracked Body Parts** set to the specific body parts you want to track with that system. diff --git a/docs/mocap/chingmu.md b/docs/mocap/chingmu.md new file mode 100644 index 0000000..a257c76 --- /dev/null +++ b/docs/mocap/chingmu.md @@ -0,0 +1,53 @@ +--- +sidebar_position: 520 +--- + +# Chingmu Avatar + +:::info +This feature is only available in [Warudo Pro](../pro). +::: + +Body tracking via [Chingmu Avatar](https://www.chingmu.com/software/downloaddetail_7.shtml). Requires access to a [Chingmu](https://chingmu.com) optical tracking system. + +In addition to character tracking, prop tracking is also supported. For example, you may want to track a chair or a handheld camera using your optical tracking system and stream the motion data to Warudo, animating a chair prop or camera in Warudo accordingly. + +## Setup + +### Character Tracking + +Open Chingmu Avatar and load the FBX file of your character, and characterize it. **The model must have normalized bones, i.e., all bones on the model should have zero rotation (0, 0, 0) when the model is in T-pose.** The model should also have identical bone hierarchy as the character model loaded in Warudo. + +:::info +You can check if a FBX model has normalized bones by importing it into Unity and checking if the bone transforms have zero rotation. +::: + +Make sure **VRPN Livestream** (启用Vrpn) is checked, along with **Rigidbodies** (刚体), **Bones** (骨骼), and **Motion Retargeting** (重定向). + +![](pathname:///doc-img/en-chingmu-1.png) + +If you have multiple characters in the scene, in Warudo, set **Chingmu Avatar Receiver → Object ID** to the index of the character that you would like to track. The index starts from 0. + +If you are running Chingmu Avatar on a different PC, you need to enter the IP address of the PC running Chingmu Avatar in Warudo's **Settings → Chingmu → VRPN Connection String**. + +### Prop Tracking + +In Warudo, create a new **Chingmu Avatar Rigidbody Receiver** asset and set **Rigidbody ID** to the ID of the rigidbody that you would like to track. Then, for **Target Asset**, select the Warudo prop/camera that you would like to control. + +:::tip +If you would like to access prop tracking data in blueprints, you can use the **Get Chingmu Avatar Rigidbody Receiver Data** node. +::: + +## Frequently Asked Questions + +Please refer to [Overview](overview#FAQ) and [Customizing Pose Tracking](pose-tracking#FAQ) for common questions. + +### My character is in extremely weird poses. + +First, make sure you have loaded the same model in Warudo and Chingmu Avatar. + +We also require the models to have normalized bones, i.e., all bones on the model should have zero rotation (0, 0, 0) when the model is in T-pose. Please refer to [this page](../misc/normalizing-model-bones) for more details. + +### My character is not moving. + +You can try click **Settings → Chingmu → Restart Chingmu Plugin**. If this does not work, try restarting Chingmu Avatar and/or Warudo. 
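The normalized-bones requirement above boils down to every bone having a (0, 0, 0) local rotation while the model stands in T-pose. A minimal illustrative check is sketched below in Python; it assumes you have already exported each bone's local rotation Euler angles (for example from Unity or Blender), and that export step is not shown.

```python
# Hypothetical export: bone name -> local rotation Euler angles (degrees) in T-pose.
bone_rotations = {
    "Hips": (0.0, 0.0, 0.0),
    "Spine": (0.0, 0.0, 0.0),
    "LeftUpperArm": (0.0, 0.0, -14.5),  # non-zero rotation: this bone is not normalized
}

def unnormalized_bones(rotations, tolerance_deg=0.01):
    """Return the bones whose T-pose local rotation is not (approximately) zero."""
    return [name for name, euler in rotations.items()
            if any(abs(angle) > tolerance_deg for angle in euler)]

bad = unnormalized_bones(bone_rotations)
print(bad or "All bones are normalized.")  # ['LeftUpperArm']
```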
diff --git a/docs/mocap/customization.md b/docs/mocap/customization.md deleted file mode 100644 index d20608b..0000000 --- a/docs/mocap/customization.md +++ /dev/null @@ -1,23 +0,0 @@ ---- -sidebar_position: 110 ---- - -# Customization - -Warudo's motion capture is implemented using [blueprints](/docs/mocap/blueprints/overview), and they are highly customizable. When you [setup the character's motion capture](../assets/character/#motion-capture), Warudo automatically generates blueprints based on your selections. For example, if you have chosen iFacialMocap + MediaPipe, two blueprints are generated: "Face Tracking - iFacialMocap" and "Pose Tracking - MediaPipe". - -## Adjust smoothing - -If you find your tracking has frame drops or is way too "smooth" thus unable to track fast movements, you may adjust the smoothing parameter in the mocap blueprint. All motion capture blueprints contain one or more nodes to smooth the data. For example, the following is the default iFacialMocap face tracking blueprint: - -![](pathname:///doc-img/zh-custom-1.webp) - -The "Smooth Rotation List" node corresponds to the smoothing of the character's bone rotations, while the "Smooth Position" node corresponds to the smoothing of the character's root position. You can adjust the "Smooth Time" property on these nodes to make the tracking smoother or more responsive. - -## Remapping blendshapes - -You can remap blendshapes if your character model's blendshape names do not follow the standard VRM/ARKit naming conventions. For more information, please refer to [this page](../blueprints/mocap-nodes.md). - -## Live2D-esque eye physics - -With the "Float Pendulum Physics" node, you can implement Live2D's eye physics for your character model with little effort. For more information, please refer to [this page](../blueprints/example-live2d-physics.md). diff --git a/docs/mocap/face-tracking.md b/docs/mocap/face-tracking.md index 899fe29..07ed2ca 100644 --- a/docs/mocap/face-tracking.md +++ b/docs/mocap/face-tracking.md @@ -2,16 +2,65 @@ sidebar_position: 20 --- -# Face Tracking +# Customizing Face Tracking -Warudo provides 5 face tracking solutions: +## Blueprint Customization -* [**OpenSeeFace (Beta)**](openseeface.md): Webcam-based face tracking. Tracks a set of basic blendshapes, head rotation and head translation. Quality of face tracking is similar to that of [VSeeFace](https://www.vseeface.icu/). -* [**iFacialMocap**](ifacialmocap.md): ARKit-based face tracking. Requires a [Face ID](https://support.apple.com/en-us/HT208109)-compatible iOS device and the $5.99 [iFacialMocap](https://apps.apple.com/us/app/id1489470545) app. Tracks 52 ARKit blendshapes, head rotation and head translation. **Most accurate and expressive.** -* [**RhyLive**](rhylive.md): ARKit-based face tracking. Requires a [Face ID](https://support.apple.com/en-us/HT208109)-compatible iOS device and the free [RhyLive](https://apps.apple.com/us/app/rhylive/) app. Tracks 52 ARKit blendshapes and head rotation. **Most accurate.** -* [**VMC**](vmc.md): Face tracking via an external application, which sends data to Warudo via the [VirtualMotionCapture protocol](https://protocol.vmc.info/english). VMC is typically not used for face tracking data. -* [**Rokoko**](rokoko.md): Face tracking via [Rokoko Studio](https://www.rokoko.com/products/studio). +During the onboarding process, you can click **Customize Face Tracking...** to customize the tracking blueprint. 
-Upcoming face tracking solutions in development: +![](pathname:///doc-img/en-mocap-3.png) +Customizing face tracking.
-* **NVIDIA Maxine:** Webcam-based face tracking. Requires a NVIDIA RTX series graphics card (or any Turing GPU). Tracks 51 ARKit blendshapes. +The following options are available: + +* **BlendShape Mapping:** Select the blendshape mapping that matches your model. For example, if your model has "Perfect Sync"/ARKit blendshapes, select **ARKit**; if your model is converted from a MMD model, select **MikuMikuDance**; otherwise, select **VRM**. By default, Warudo will try to automatically detect the blendshape mapping, but you can override it here. +* **Enable Head and Body Movements:** If enabled, Warudo will move your character's head and body according to the tracking data. If disabled, only the face (blendshapes and eye bones) will be animated. This is automatically set to **No** by the onboarding assistant if you use a full-body pose tracking system that already tracks the head and body. +* **Idle Head Animation (Auto Blinking / Auto Eye Movements / Auto Head Movements):** If enabled, Warudo will add subtle head motion, eye movement, and blinking to your character. +* **Look At:** If enabled, your character will look at a target (default to the camera) when you look forward. This helps your character / you maintain eye contact with the audience regardless of the camera position, while still allowing your character / you to look around freely. +* **Lip Sync**: If enabled, Warudo will animate your character's mouth based on your speech. You can choose to enable lip sync only when tracking is lost, or always enable it. + +:::info +These options above affect the generated blueprint; therefore, to change these options after the setup, you need to re-run the setup, or manually modify the face tracking blueprint. **Character → Motion Capture → Blueprint Navigation** provides shortcuts to the specific parts of the blueprint that you may want to modify: + +![](pathname:///doc-img/en-mocap-6.png) +::: + +## Tracker Customization + +After onboarding, you can go to the corresponding face tracking asset (e.g., iFacialMocap Receiver, MediaPipe Tracker) to customize the tracking data itself. The following options are available: + +* **Mirrored Tracking:** If enabled, Warudo will mirror the tracking data. +* **[BlendShape](../tutorials/3d-primer#blendshape) Sensitivity:** Adjust the sensitivity of the character's facial expressions. If your expressions are too subtle, increase the sensitivity; if your expressions are too exaggerated, decrease the sensitivity. +* **Configure BlendShapes Mapping:** Adjust each individual blendshape's threshold and sensitivity. This is useful if you want to disable certain blendshapes, or if you want to adjust the sensitivity of each blendshape individually. + ![](pathname:///doc-img/en-mocap-5.png) + The left two values are the range of the input blendshape value from the tracker, and the right two values are the range of the output blendshape value to the character. For example, if the input range is 0-1 and the output range is 0-2, then when the input blendshape value is 0.40, the output blendshape value will be 0.80. + - To disable a blendshape, set the top right value to 0. + - To make a blendshape more sensitive, increase the top right value; to make a blendshape less sensitive, decrease the top right value. + - To trigger a blendshape at a higher threshold (e.g., your character mouth is already slightly opened but your mouth is still closed), increase the bottom right value. To trigger a blendshape at a lower threshold, decrease the bottom right value. 
+* **Head Movement/Rotation Intensity:** Adjust the intensity of the head movement/rotation. +* **Body Movement Intensity:** Adjust the intensity of the automatic body movement due to head movement. +* **Eye Movement Intensity:** Adjust the intensity of the pupil movement. +* **Eye Blink Sensitivity:** Adjust the sensitivity of the eye blink; this is a shortcut for adjusting the sensitivity of the eye blink blendshape in **Configure BlendShapes Mapping**. +* **Linked Eye Blinking:** If enabled, force both eyes to blink at the same time. Useful if your tracking is not accurate enough to blink each eye individually. +* **Use Bones/BlendShapes for Eye Movement:** Whether to use eye bones or blendshapes for eye movement. If your model has eye bones, it is recommended to use eye bones, as they are more accurate and allow for [IK](../tutorials/3d-primer#IK). There are two cases where you may want to enable **Use BlendShapes For Eye Movement**: + - Your model does not have eye bones. + - Your model's eye blendshapes are supplementary to the eye bones, i.e., the eye blendshapes change the shape of the eyes, in addition to the eye bones moving the pupils. (Your modeler should be able to tell you whether this is the case.) + +## Frequently Asked Questions {#FAQ} + +### When I move my head, my character's body also moves. How do I disable this? + +This is caused by the **Body Movement Intensity** option in the motion capture receiver asset. Set it to 0 to disable body movement. + +### How do I enable lip sync? + +During the onboarding process, you can enable lip sync by clicking **Customize Face Tracking...** and enabling **Lip Sync**. + +You can adjust lip sync settings after onboarding by clicking **Character → Motion Capture → Blueprint Navigation → Lip Sync Settings**. + +### My model's mouth is slightly open even when my mouth is closed. + +This is usually caused by applying ARKit tracking on a non-ARKit-compatible model. To fix this, you can either: + +* Add ["Perfect Sync" / ARKit blendshapes](../tutorials/3d-primer#arkit) to your model. This is recommended since ARKit-based face tracking is much more expressive, and you already have the tracking data. +* Click **Configure BlendShapes Mapping** in the tracker asset, and increase the threshold (i.e., set the bottom right value to a negative number like `-0.25`) of the ARKit mouth blendshapes `jawOpen`, `mouthFunnel`, `mouthPucker`. diff --git a/docs/mocap/ifacialmocap.md b/docs/mocap/ifacialmocap.md index 049e3d1..ef3421a 100644 --- a/docs/mocap/ifacialmocap.md +++ b/docs/mocap/ifacialmocap.md @@ -1,49 +1,55 @@ --- -sidebar_position: 40 +sidebar_position: 60 --- -# iFacialMocap +# iFacialMocap/FaceMotion3D ARKit-based face tracking. Requires a [Face ID](https://support.apple.com/en-us/HT208109)-compatible iOS device and the $5.99 [iFacialMocap](https://apps.apple.com/us/app/id1489470545) app. Tracks 52 ARKit blendshapes, head rotation and head translation. +An alternative to iFacialMocap is [FaceMotion3D](https://apps.apple.com/us/app/facemotion3d/id1507538005), which is $14.99 for a permanent license. It has more features than iFacialMocap, but in regard to face tracking for Warudo, the two apps are functionally equivalent. + +:::tip +For best tracking quality, we recommend using an iPhone 12 or newer (iPhone mini also works). Older iPhones may have lower tracking quality. 
+::: + :::info -iFacialMocap is actually very simple in its offerings and there are free alternatives like [waidayo](https://apps.apple.com/us/app/waidayo/id1513166077) that provide ARKit-based facial tracking with the same level of accuracy. (You can even write your own using[ Unity](https://docs.unity3d.com/Packages/com.unity.xr.arkit-face-tracking@1.1/manual/index.html).) However, iFacialMocap is one of the earliest and most widely used apps in this category, so Warudo added support for it first, and support for other similar apps will come later. +If your iPhone is not compatible with Face ID, you may still be able to use iFacialMocap / FaceMotion3D if it has an A12 or higher chip and has been updated to iOS 14. However, the tracking quality may be subpar. ::: ## Setup -Open the iFacialMocap app, tap on the gear icon in the upper right corner to enter the settings interface. Tap on "Destination IP address" and enter your computer's IP address. +:::tip +If you do not know your computer's IP, you can check on the configuration page of the **iFacialMocap Receiver** asset. -![](pathname:///doc-img/zh-ifacialmocap-1.webp) +![](pathname:///doc-img/en-ifacialmocap-1.png) -:::info -If you do not know your computer's IP, you can check on the configuration page of the "iFacialMocap Receiver". +If multiple IPs are listed, you would need to try each one. Usually, the IP address assigned to your computer by your WiFi router starts with `192.168`. For example, in the above picture, you can first try `192.168.1.151`. +::: -![](pathname:///doc-img/zh-ifacialmocap-2.webp) +### iFacialMocap -If multiple IPs are listed, you would need to try each one. Usually, the IP address assigned to your computer by your WiFi router starts with 192.168. For example, in the above picture, you can first try 192.168.1.2. -::: +Open the iFacialMocap app, tap on the gear icon in the upper right corner to enter the settings interface. Tap on **Destination IP address** and enter your computer's IP address. + +![](pathname:///doc-img/zh-ifacialmocap-1.webp) -It's recommended to enable "Optimize for long-time streaming," which can help reduce device overheating from prolonged use. +It's recommended to enable **Optimize for long-time streaming**, which can help reduce device overheating from prolonged use. ![](pathname:///doc-img/zh-ifacialmocap-3.webp) -## Properties - -* Character: Select the character to apply face tracking to. -* Port: The port used by Warudo to receive data. This can be changed in iFacialMocap -> Settings -> Details Settings -> Port. -* Calibrate: Correct the position and rotation of the head to face forward. -* Head Movement Range: The range of movement for the head. -* Head Vertical Movement: Whether or not to allow the head to move up and down. - -![](pathname:///doc-img/zh-ifacialmocap-4.webp) -When enabled, the legs will naturally bend when the head moves down.
- -* Inverted Head Forward/Backward Movement: Whether or not to invert the forward-back movement of the head. -* Head Rotation Range: The range of head rotation. -* Head Rotation Offset: The offset of head rotation. -* Eye Movement Intensity: The range of eye movement. -* Eye Blink Sensitivity: The sensitivity of the eye blinking. -* Body Movement Intensity: The range of natural body movement that follows head movement. -* Limit Eye Squint: If your model is compatible with ARKit blendshapes, the eyes of your model may not be able to completely close (or close too much) based on the preferences of the modeler. In this case, try toggling this option. -* Use BlendShapes For Eye Movement: **Requires ARKit blendshapes to be present on the model.** Controls the movement of the character's eyes using eye movement blendshapes (EyeLookInLeft, EyeLookInRight, EyeLookOutLeft, EyeLookOutRight) instead of the eye bones. If your model does not have eye bones, enable this option. +### FaceMotion3D + +Open the FaceMotion3D app, tap on the **Setting** icon in the lower left corner to enter the settings interface. Tap on **Streaming settings**. Set **Software Name** to **Other**, and enable **Compatible with iFacialMocap**. + +Go back to the main interface, tap on the **Live** button in the upper left corner, select **Other** as the software, and enter your computer's IP address. Then, tap **Connect** to start streaming. + +## Calibration + +You can calibrate iFacialMocap's tracking by: +* clicking **Character → Motion Capture → Quick Calibration → Calibrate iFacialMocap**, or +* clicking **Calibrate** in the **iFacialMocap Receiver** asset. + +During calibration, you should look straight ahead and keep your head still. After calibration, you can move your head freely. + +## Frequently Asked Questions + +Please refer to [Overview](overview#FAQ) and [Customizing Face Tracking](face-tracking#FAQ) for common questions. diff --git a/docs/mocap/leap-motion.md b/docs/mocap/leap-motion.md new file mode 100644 index 0000000..b14d535 --- /dev/null +++ b/docs/mocap/leap-motion.md @@ -0,0 +1,42 @@ +--- +sidebar_position: 70 +--- + +# Leap Motion Controller + +Hand tracking via [Leap Motion Controller](https://leap2.ultraleap.com/leap-motion-controller-2/). + +## Setup + +We recommend using a Leap Motion Controller 2 for more accurate and stable tracking. The original Leap Motion Controller is also supported. + +To connect the Leap Motion Controller to Warudo, you need to download and install the latest Gemini software from [Leap Motion's website](https://leap2.ultraleap.com/gemini-downloads/). + +Warudo supports all 3 tracking modes offered by the Leap Motion Controller: **Desktop**, **Screen Top**, and **Chest Mounted**. We recommend using the **Chest Mounted** mode along with a [neck/chest mount](https://www.etsy.com/market/leap_motion_mounting) for the best experience. + +## Calibration {#Calibration} + +There is generally no need to calibrate the Leap Motion Controller. However, you can adjust **Controller Position Offset**, **Controller Rotation Offset**, and **Controller Scale** in the **Leap Motion Controller** asset to adjust the tracking. A virtual Leap Motion Controller will be displayed in the scene to help you visualize the tracking. + +![](pathname:///doc-img/en-leapmotion-1.png) +Adjusting the virtual Leap Motion Controller.
+ +## Options + +* **Controller Position Offset**: The offset of the controller position. Positive X is a left offset, positive Y is an upward offset, and positive Z is a forward offset. +* **Controller Rotation Offset**: The offset of the controller rotation. +* **Controller Scale**: The scale of the controller. Increase this value if you want hands to be further away from the body. You can also enable **Per-Axis Scale** to scale the controller on each axis separately. +* **Fix Finger Orientations Weight**: Due to different model specifications, the finger orientations of the Leap Motion Controller may not match the finger orientations of the model. This option allows you to adjust the finger orientations of the Leap Motion Controller to match the model. 0 means no adjustment, 1 means full adjustment. Adjust this value until the fingers look natural. +* **Shoulder Rotation Weight**: How much the shoulders should be rotated. 0 means no rotation, 1 means full rotation. Adjust this value until the shoulders look natural. + +## Frequently Asked Questions {#FAQ} + +Please refer to [Overview](overview#FAQ) and [Customizing Pose Tracking](body-tracking#FAQ) for common questions. + +### The Leap Motion Tracker asset says "Tracker not started." + +Please make sure you have installed the latest [Gemini software](https://leap2.ultraleap.com/gemini-downloads/), and it is running in the background. + +### My model's wrist/fingers look unnatural. + +Try adjusting the **Fix Finger Orientations Weight** option in the **Leap Motion Tracker** asset. You may also need to adjust the **Wrist Rotation Offset** and **Global Finger Rotation Offset** options (check the left boxes to enable them). diff --git a/docs/mocap/mediapipe.md b/docs/mocap/mediapipe.md index 24286f6..725a1f2 100644 --- a/docs/mocap/mediapipe.md +++ b/docs/mocap/mediapipe.md @@ -4,50 +4,86 @@ sidebar_position: 50 # MediaPipe -Webcam-based upper body tracking. Advantages include a wider range of motion (e.g. forward/backward hand movements) and more responsive tracking (e.g. punches). Drawbacks include the need for some time to set up and calibrate. +Webcam-based face and hand tracking. For face tracking, MediaPipe tracks 40+ ARKit blendshapes, head rotation and head translation. For pose tracking, MediaPipe tracks your wrist and fingers. + +## Setup + +MediaPipe is built into Warudo, so you do not need to install any additional software. Most webcams should work with MediaPipe, but if you have trouble getting it to work, refer to the [FAQ](#FAQ) section below. + +MediaPipe can be run either on CPU or GPU. GPU is recommended for better performance; by default, MediaPipe already runs on GPU. To use CPU instead, go to the **MediaPipe Tracker** asset and disable **GPU Acceleration**. :::tip -MediaPipe is a widely-used technology used in software such as [Webcam Motion Capture](https://webcammotioncapture.info/), but Warudo's implementation enables a wider range of motion and more stable hand movements. +When to use CPU? If you are playing a game that uses a lot of GPU resources, you may want to use CPU for MediaPipe instead to avoid performance issues. ::: -## Properties +If you use MediaPipe for hand tracking, please make sure you are at the camera center and your hands are fully visible in the camera view. 
You can enable **Show Camera** in the **MediaPipe Tracker** asset to see the camera view; it should look like this: + +![](pathname:///doc-img/en-mediapipe-1.png) + +Usually, you should place the camera above the computer screen, with your head slightly above the center of the camera view. We also recommend calibrating hand tracking before use. See [Calibration](#Calibration) for details. + +## Calibration {#Calibration} + +### Face Tracking + +You can calibrate MediaPipe's face tracking by: +* clicking **Character → Motion Capture → Quick Calibration → Calibrate MediaPipe**, or +* clicking **Calibrate** in the **MediaPipe Tracker** asset. + +During calibration, you should look straight ahead and keep your head still. After calibration, you can move your head freely. + +### Hand Tracking + +You can calibrate MediaPipe's hand tracking by clicking **Calibrate Hand Tracking** in the **MediaPipe Tracker** asset. Raise one of your hands next to your ear with your palm facing the camera. Use your other hand to press **OK** to calibrate. -* Character: Select the character to apply body tracking to. -* Input Camera: Select the camera. **Currently, IP cameras (such as** [**DroidCam**](https://play.google.com/store/apps/details?id=com.dev47apps.droidcam\&hl=en\_US\&gl=US\&pli=1)**) are not supported.** -* Calibrate: Calibrate the position of the hands. **Calibration is recommended when using for the first time!** +![](pathname:///doc-img/en-mediapipe-2.png) +During calibration, the hands should be in a relaxed posture with palm facing the camera, with fingers slightly bent if desired.
-![](pathname:///doc-img/zh-mediapipe-1.webp) -During calibration, the hands should be in a relaxed posture, with fingers slightly bent if desired.
+If you find your hands are moving too fast or too slow, you can adjust the **Hand Movement Range** option. You can also adjust the **Hand Movement Offset** option, to move the hands closer to your head, for example. -* Show Camera: Whether to show the camera's view. -* Invert Hands: Whether to invert hands. +## Options -### Advanced +* **Hand Movement Range**: The range of motion for the hands. X is left and right, Y is up and down, and Z is forward and backward. +* **Hand Movement Offset**: The offset of the hand movement range. Positive X is a left offset, positive Y is an upward offset, and positive Z is a forward offset. +* **Arm Swivel Offset**: Rotates the elbow inwards or outwards. +* **Hand Horizontal Distance Compensation**: Brings the character's hands closer together, making it easier to make gestures like crossing fingers. +* **Hand Y Threshold**: If the distance between the wrist and the top and bottom edges of the camera view is less than this value, the tracking of that hand is ignored. This helps to avoid jitter problems caused by unstable tracking. +* **Camera Diagonal Field of View**: The field of view of the camera. An accurate value helps to estimate the depth of the hands. You can usually find the field of view listed by the manufacturer on the camera's store page. +* **Hand Push Z Max**: The hands will be pushed forward to prevent clipping through the body. This value determines the maximum distance that the hands will be pushed forward. +* **Hand Push Y Range**: The further down the hands are from the shoulders, the more they will be pushed forward. When the vertical distance from the hands to the shoulders is greater than or equal to this value, they will be pushed forward to the maximum. +* **Hand Push Y Offset**: When this value is 0, the hands start to be pushed forward from the shoulders down. When this value is positive, the hands start to be pushed forward from above the shoulders. +* **Shoulder Rotation Weight**: How much the shoulders should be rotated. 0 means no rotation, 1 means full rotation. Adjust this value until the shoulders look natural. -* Hand Movement Range: The range of motion for the hands. X is left and right, Y is up and down, and Z is forward and backward. -* Hand Movement Offset: The offset of the hand movement range. Positive X is a left offset, positive Y is an upward offset, and positive Z is a forward offset. -* Hand Horizontal Distance Compensation: Brings the character's hands closer together, making it easier to make gestures like crossing fingers. -* Hand Y Threshold: If the distance between the wrist and the top and bottom edges of the camera view is less than this value, the tracking of that hand is ignored. This helps to avoid jitter problems caused by unstable tracking. -* Camera Diagonal Field of View: The FoV of the camera. An accurate value helps to estimate the depth of the hands. You can usually find the field of view listed by the manufacturer on the camera's store page. -* Camera Aspect Ratio: Like the FoV, an accurate value helps to estimate the depth of the hands. Most cameras on the market are 16:9. -* Hand Push Z Max: The hands will be pushed forward to prevent clipping through the body. This value determines the maximum distance that the hands will be pushed forward. -* Hand Push Y Range: The further down the hands are from the shoulders, the more they will be pushed forward. 
When the vertical distance from the hands to the shoulders is greater than or equal to this value, they will be pushed forward to the maximum. -* Hand Push Y Offset: When this value is 0, the hands start to be pushed forward from the shoulders down. When this value is positive, the hands start to be pushed forward from above the shoulders. -* Hand Calibration Distance: The calibration parameters set when pressing "Calibrate". Should not be adjusted manually. -* Shoulder Rotation Correction: Adjusts the rotation of the shoulders to accommodate rotations of the chest and/or head in certain poses. +## Frequently Asked Questions {#FAQ} -"Shoulder Rotation Correction" disabled
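One way to read how **Hand Push Z Max**, **Hand Push Y Range**, and **Hand Push Y Offset** combine is as a clamped linear ramp: the further a hand drops below the point set by the offset, the closer the forward push gets to its maximum. The Python below only spells out that reading with made-up numbers; it is not taken from Warudo's source.

```python
def hand_push_forward(hand_drop, push_y_offset=0.0, push_y_range=0.4, push_z_max=0.1):
    """Forward push for a hand that is `hand_drop` units below the shoulder.

    With a positive offset the ramp starts above the shoulder; the push then grows
    linearly and is clamped at push_z_max. All values here are illustrative.
    """
    if push_y_range <= 0:
        return 0.0
    t = (hand_drop + push_y_offset) / push_y_range   # 0..1 across the ramp
    t = max(0.0, min(1.0, t))
    return t * push_z_max

for drop in (0.0, 0.2, 0.4, 0.6):
    print(drop, round(hand_push_forward(drop), 3))
# 0.0 -> 0.0, 0.2 -> 0.05, 0.4 -> 0.1, 0.6 -> 0.1 (clamped at the maximum)
```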
+### My tracking has frozen. +If you are using a laptop, make sure it is plugged in and not running on battery. - ![](pathname:///doc-img/zh-mediapipe-3.webp) -启用「肩膀旋转校正」
-Setting up motion capture in the onboarding assistant asset.
+ +After the onboarding process is complete, you can select the relevant motion capture assets to further customize your tracking. For example, if you are using iFacialMocap, you may notice Warudo adds some movement to your character's body when your head moves. If this is not desirable, you can set **Body Movement Intensity** to 0 in the **iFacialMocap Receiver** asset. + +![](pathname:///doc-img/en-mocap-1.png) +Adjusting the body movement intensity of iFacialMocap.
+ +The second way to set up motion capture is to use **Character → Setup Motion Capture**. This allows you to set up face tracking and/or pose tracking directly. However, some checking steps in the onboarding process are also skipped. + +![](pathname:///doc-img/en-mocap-2.png) +Setting up motion capture in the character asset.
+ +This method is useful if you need to set up multiple face tracking or pose tracking systems, since the onboarding assistant always remove the existing motion capture setup when you start a new onboarding process. + +Whichever method you choose, corresponding tracking blueprints will be generated and added to your scene. For example, if you have chosen iFacialMocap for face tracking and MediaPipe for pose tracking, you will be able to see two blueprints generated: **Face Tracking - iFacialMocap** and **Pose Tracking - MediaPipe**. You can modify these blueprints to customize your tracking. + +:::caution +You should almost never create a motion capture asset manually, i.e., by clicking on **Add Asset** and selecting a motion capture asset, such as **iFacialMocap Receiver**. This is because the motion capture asset alone only receives or provides the tracking data, but does not connect the tracking data to your character, which needs to be done by blueprints. The onboarding assistant and the **Setup Motion Capture** function automatically create the necessary blueprints for you. +::: + +## Frequently Asked Questions {#FAQ} + +### My character is not moving. + +If you are using a motion capture system that requires an external application, such as iFacialMocap, make sure the application is running and the tracking data is being sent to Warudo. Also, make sure your computer's firewall is not blocking Warudo from receiving the tracking data; you may need to add Warudo to the whitelist, or temporarily disable the firewall. + +Some motion capture receivers have a **Port** option. Make sure the port number matches the port number in the external application. -## Professional +### My tracking is too smooth / too jittery. -* [Rokoko](rokoko.md) -* [Xsens MVN](mocap/xsens-mvn) -* Motion capture systems compatible with the [VMC protocol](https://protocol.vmc.info/english): ([Perception Neuron](https://github.com/emilianavt/VSeeFaceManual#perception-neuron-tracking), etc.): [VMC ](vmc.md)/ [iFacialMocap ](ifacialmocap.md)+ [VMC](vmc.md). -* For the following and other options, a private Warudo build is required. Please contact [tiger@warudo.app](mailto:tiger@warudo.app) for details. - * Motion capture systems with [VRPN ](https://github.com/vrpn/vrpn)([Chingmu](https://digi-human.com) Avatar Lite, [OptiTrack](https://optitrack.com/), etc.) - * [Manus Core](https://www.manus-meta.com/knowledge-products/manus-core) +Please go to the tracking blueprint (e.g., **Face Tracking - iFacialMocap**) and locate the **Smooth Rotation List** / **Smooth Position List** / **Smooth Transform** / **Smooth BlendShape List** nodes. Increase the **Smooth Time** value to make the tracking smoother, or decrease the value to make the tracking more responsive. diff --git a/docs/mocap/rhylive.md b/docs/mocap/rhylive.md index 832c27a..5e8e229 100644 --- a/docs/mocap/rhylive.md +++ b/docs/mocap/rhylive.md @@ -1,5 +1,5 @@ --- -sidebar_position: 70 +sidebar_position: 61 --- # RhyLive @@ -24,12 +24,12 @@ To use RhyLive on an iOS device, open the app and tap on the menu icon in the to ![](pathname:///doc-img/zh-rhylive-1.webp) -:::info -If you do not know your computer's IP, you can check on the configuration page of the "RhyLive Receiver". +:::tip +If you do not know your computer's IP, you can check on the configuration page of the **RhyLive Receiver** asset. -![](pathname:///doc-img/zh-rhylive-2.webp) +![](pathname:///doc-img/en-ifacialmocap-1.png) -If multiple IPs are listed, you would need to try each one. 
Usually, the IP address assigned to your computer by your WiFi router starts with 192.168. For example, in the above picture, you can first try 192.168.1.2. +If multiple IPs are listed, you would need to try each one. Usually, the IP address assigned to your computer by your WiFi router starts with `192.168`. For example, in the above picture, you can first try `192.168.1.151`. ::: :::info @@ -40,14 +40,24 @@ It is recommended to turn off "锁定弯腰" (Lock body bending) and turn on " ![](pathname:///doc-img/zh-rhylive-3.webp) -## Properties - -* Character: Select the character to apply tracking to. -* Port: The port used by Warudo to receive data. You can change it in RhyLive -> Settings -> Port. -* Calibrate: Correct the position and rotation of the head to face forward. -* Head Rotation Offset: The offset of head rotation. -* Body Movement Intensity: The range of body movement. -* Eye Movement Intensity: The range of eye movement. -* Eye Blink Sensitivity: The sensitivity of the eye blinking. -* Limit Eye Squint: If your model is compatible with ARKit blendshapes, the eyes of your model may not be able to completely close (or close too much) based on the preferences of the modeler. In this case, try toggling this option. -* Use BlendShapes For Eye Movement: **Requires ARKit blendshapes to be present on the model.** Controls the movement of the character's eyes using eye movement blendshapes (EyeLookInLeft, EyeLookInRight, EyeLookOutLeft, EyeLookOutRight) instead of the eye bones. If your model does not have eye bones, enable this option. +## Calibration + +You can calibrate RhyLive's tracking by: +* clicking **Character → Motion Capture → Quick Calibration → Calibrate RhyLive**, or +* clicking **Calibrate** in the **RhyLive Receiver** asset. + +During calibration, you should look straight ahead and keep your head still. After calibration, you can move your head freely. + +## Frequently Asked Questions + +Please refer to [Overview](overview#FAQ) and [Customizing Face Tracking](face-tracking#FAQ) for common questions. + +### I am using the "Wired" option and connected my iPhone to my computer via USB, but my character is not moving. + +Make sure [iTunes for Windows](https://support.apple.com/en-us/HT210384) is installed on your computer. If you have iTunes installed from Microsoft Store, you may need to uninstall it and install the version from Apple's website. + +Alternatively, you can try using the "Wireless" option instead. + +### I am using the "Wireless" option, and my computer and iPhone are connected to the same WiFi network, but my character is not moving. + +Please try toggling the "无线" (Wireless) switch in the RhyLive app off and on, or restarting the app. diff --git a/docs/mocap/rokoko.md b/docs/mocap/rokoko.md index f5e3df9..87fab10 100644 --- a/docs/mocap/rokoko.md +++ b/docs/mocap/rokoko.md @@ -8,11 +8,11 @@ Face & body tracking via [Rokoko](https://www.rokoko.com/). ## Setup -Open Rokoko Studio -> Livestream -> Activate streaming to Unity: +Open Rokoko Studio and enable **Livestream -> Activate streaming to Unity**: ![](pathname:///doc-img/zh-rokoko-1.webp) -Then, update Profile Name to the Rokoko actor whose motion capture data you would like to receive: +Then, update **Profile Name** in the **Rokoko Receiver** asset to the Rokoko actor whose motion capture data you would like to receive: ![](pathname:///doc-img/zh-rokoko-2.webp) @@ -24,10 +24,14 @@ Note: You need a Rokoko Plus or Pro subscription to stream to Unity. 
For more details, see the [official documentation](https://support.rokoko.com/hc/en-us/articles/4410471183633-Getting-Started-Streaming-to-Unity). ::: -## Properties +## Calibration -* Port: The port used by Rokoko Studio. The default is 14043. -* Profile Name: The name of the Rokoko Actor to receive. -* Eye Blink Sensitivity: The sensitivity of the eye blinking. -* Eye Movement Intensity: The range of eye movement. -* Adjust Hip Height Based On Studio Actor: Whether to adjust hip height based on studio actor. +Calibration of Rokoko hardware is done in Rokoko Studio. + +## Frequently Asked Questions + +Please refer to [Overview](overview#FAQ) and [Customizing Pose Tracking](pose-tracking#FAQ) for common questions. + +### My tracking drifts over time. + +This is a common issue with inertial motion capture systems, which drift over time due to accumulated errors. To reduce the drift, make sure there is no magnetic or electrical interference near the tracking sensors. diff --git a/docs/mocap/stretchsense.md b/docs/mocap/stretchsense.md new file mode 100644 index 0000000..1a5865d --- /dev/null +++ b/docs/mocap/stretchsense.md @@ -0,0 +1,21 @@ +--- +sidebar_position: 130 +--- + +# StretchSense Glove + +Finger tracking via [StretchSense Glove](https://stretchsense.com/). + +## Setup + +Warudo supports StretchSense Glove via the [VMC protocol](./vmc). To use the gloves with Warudo, select **StretchSense Glove** as **Secondary Pose Tracking** during the onboarding process. + +![](pathname:///doc-img/en-stretchsense-1.png) + +## Calibration + +Calibration of StretchSense Glove is done in StretchSense Hand Engine. + +## Frequently Asked Questions + +Please refer to [Overview](overview#FAQ) and [Customizing Pose Tracking](pose-tracking#FAQ) for common questions. diff --git a/docs/mocap/virdyn.md b/docs/mocap/virdyn.md new file mode 100644 index 0000000..e2c8404 --- /dev/null +++ b/docs/mocap/virdyn.md @@ -0,0 +1,25 @@ +--- +sidebar_position: 110 +--- + +# Virdyn Studio (VDMocapStudio) + +Body tracking via [VDMocapStudio](https://www.virdynm.com/virdyn-vdmocap-studio-motion-capture-software-system-for-vdsuit-full-product/). Requires a [VDSuit](https://www.virdynm.com/virdyn-vdsuit-full-for-full-body-function-inertia-motion-capture-suit-product/) suit. + +## Setup + +Open VDMocapStudio and enable data streaming by clicking **Broadcast -> OpenShare**. Make sure the IP selected is accessible from Warudo. + +![](pathname:///doc-img/en-virdyn-1.png) + +## Calibration + +Calibration of Virdyn hardware is done in VDMocapStudio. You can also use **Virdyn Studio Receiver -> Calibrate Root Transform** to reset the character's root position and rotation in Warudo. + +## Frequently Asked Questions + +Please refer to [Overview](overview#FAQ) and [Customizing Pose Tracking](pose-tracking#FAQ) for common questions. + +### My tracking drifts over time. + +This is a common issue with inertial motion capture systems, which drift over time due to accumulated errors. To reduce the drift, make sure there is no magnetic or electrical interference near the tracking sensors. diff --git a/docs/mocap/vmc.md b/docs/mocap/vmc.md index a684075..b956ffe 100644 --- a/docs/mocap/vmc.md +++ b/docs/mocap/vmc.md @@ -1,23 +1,47 @@ --- -sidebar_position: 80 +sidebar_position: 500 --- # VMC -Tracking via an external application, which sends data to Warudo via the [VirtualMotionCapture protocol](https://protocol.vmc.info/english). 
+Tracking via an external application, which sends data to Warudo via the [VirtualMotionCapture protocol](https://protocol.vmc.info/english). VR trackers are also supported using the paid (supporter) version of [VirtualMotionCapture](https://www.patreon.com/sh_akira). ## Setup +### VR Trackers + +First, make sure you have the paid (supporter) version of [**VirtualMotionCapture**](https://vmc.info/) installed. It can be downloaded from the developer's [Patreon](https://www.patreon.com/sh_akira) or [pixivFANBOX](https://akira.fanbox.cc/). + +:::tip +What is the difference between VirtualMotionCapture and VMC? Simply put, the name can refer to two different things: one is the original VirtualMotionCapture application, and the other is the protocol used by VirtualMotionCapture and other applications. (A protocol is basically how applications communicate with each other.) In this handbook, we use "VMC" to refer to the protocol, and "VirtualMotionCapture" to refer to the application. +::: + +Make sure you have connected your VR trackers to SteamVR. Then, open VirtualMotionCapture, load your VRM model, and perform calibration. Finally, check **Enable OSC motion sender** in **Settings -> VMCProtocol Motion sender**. The default port 39539 is also the default port that Warudo's VMC receiver uses. + +![](pathname:///doc-img/en-vmc-1.png) + +### VSeeFace + The following example demonstrates how to apply VMC data to the character using [VSeeFace](https://www.vseeface.icu/). Other software that supports VMC can be configured in a similar manner. -Click on the settings in the upper right corner -> General settings, make sure "OSC/VMC Protocol" -> "Send datawith OSC/VMC protocol" is checked. The default IP is the localhost IP(127.0.0.1); the default port 39539 is also the default port that Warudo's VMC receiver uses, so if you want to send the motion capture data from VSeeFace to Warudo running on the same computer, you are already done! +In **General settings**, make sure **OSC/VMC Protocol -> Send data with OSC/VMC protocol** is checked. The default port 39539 is also the default port that Warudo's VMC receiver uses. ![](pathname:///doc-img/zh-vmc-1.webp) -:::caution -This function requires that all initial rotations of the bones in the model be 0, otherwise the data sent to other software may not be compatible. If your model is in [VRM format](https://vrm.dev/), then in most cases there should be no issue. However, in very rare cases, the modeler may not have checked the Enforce T-Pose option when exporting, and it can be fixed by re-exporting with this option checked. -::: +### Other Applications + +For other applications that support VMC, check the application's documentation to see how to enable VMC tracking. Make sure you are sending to the correct port (default 39539). + +## Calibration + +Calibration of VMC-compatible tracking systems is done in the external application. + +## Frequently Asked Questions + +Please refer to [Overview](overview#FAQ) and [Customizing Pose Tracking](pose-tracking#FAQ) for common questions. + +### My character is in extremely weird poses. -## Properties +First, make sure you have loaded the same model in Warudo and the external application. -* Port: The port that Warudo uses to receive data. +We also require the models to have normalized bones, i.e., all bones on the model should have zero rotation (0, 0, 0) when the model is in T-pose. If your model is in [VRM format](https://vrm.dev/), then in most cases there should be no issue. However, in very rare cases, the modeler may not have checked the Enforce T-Pose option when exporting, and it can be fixed by re-exporting with this option checked. Please refer to [this page](../misc/normalizing-model-bones) for more details.
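For the curious: under the hood, the VMC protocol is just OSC messages sent over UDP, which is why only an IP address and a port need to match between the sender and Warudo. The sketch below is purely illustrative and is not required for any of the setups above; it assumes the third-party `python-osc` package and uses two addresses from the VMC specification to set a blendshape value on whatever VMC receiver is listening on the default port 39539.

```python
# Illustration only -- not needed for a normal Warudo setup.
# Assumes: `pip install python-osc`, and a VMC receiver (e.g., Warudo's) listening on port 39539.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 39539)  # IP and port of the VMC receiver

# Set a blendshape value, then apply all pending blendshape changes
# (addresses are from the VMC protocol specification).
# "Joy" is just an example clip name; it must exist on the loaded model to have any visible effect.
client.send_message("/VMC/Ext/Blend/Val", ["Joy", 1.0])
client.send_message("/VMC/Ext/Blend/Apply", [])
```

Real senders such as VirtualMotionCapture or VSeeFace stream messages like these (plus bone transforms) many times per second, which is also why the receiving port must match and must not be blocked by a firewall.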
diff --git a/docs/mocap/xsens-mvn.md b/docs/mocap/xsens-mvn.md index 3450a76..629f932 100644 --- a/docs/mocap/xsens-mvn.md +++ b/docs/mocap/xsens-mvn.md @@ -8,10 +8,12 @@ Body tracking via [Xsens MVN Analyze/Animate](https://base.xsens.com/s/motion-ca ## Setup -Open Xsens MVN -> Options -> Network Streamer and configure as shown below. +Open **Xsens MVN -> Options -> Network Streamer** and configure as shown below. ![](pathname:///doc-img/zh-xens-1.webp) +Then, update **Actor ID** in the **Xsens MVN Receiver** asset to the Xsens Actor whose motion capture data you would like to receive. The default is 1. + :::caution Note: You need an MVN Animate Plus or Pro subscription to use the Network Streamer feature. ::: @@ -20,7 +22,14 @@ Note: You need an MVN Animate Plus or Pro subscription to use the Network Stream For more details, see the [official documentation](https://base.xsens.com/s/article/MVN-Unity-Live-Plugin?language=en\_US). ::: -## Properties +## Calibration + +Calibration of Xsens hardware is done in Xsens MVN. + +## Frequently Asked Questions + +Please refer to [Overview](overview#FAQ) and [Customizing Pose Tracking](pose-tracking#FAQ) for common questions. + +### My tracking drifts over time. -* Port: The port used by Xsens MVN (as shown in the above picture). The default is 9763. -* Actor ID: The ID of the Xsens Actor to receive. The default is 1. +This is a common issue with inertial motion capture systems, which drift over time due to accumulated errors. To reduce the drift, make sure there is no magnetic or electrical interference near the tracking sensors. diff --git a/docs/modding/_category_.json b/docs/modding/_category_.json index d41b3f3..8eeacfa 100644 --- a/docs/modding/_category_.json +++ b/docs/modding/_category_.json @@ -1,6 +1,6 @@ { "position": 50, - "label": "Extensions", + "label": "Modding", "collapsible": true, "collapsed": false -} \ No newline at end of file +} diff --git a/docs/scripting/_category_.json b/docs/scripting/_category_.json new file mode 100644 index 0000000..1e846a9 --- /dev/null +++ b/docs/scripting/_category_.json @@ -0,0 +1,6 @@ +{ + "position": 51, + "label": "Scripting", + "collapsible": true, + "collapsed": false +} diff --git a/docs/scripting/overview.md b/docs/scripting/overview.md new file mode 100644 index 0000000..beaad89 --- /dev/null +++ b/docs/scripting/overview.md @@ -0,0 +1,5 @@ +--- +sidebar_position: 0 +--- + +# Overview diff --git a/docs/tutorials/3d-primer.md b/docs/tutorials/3d-primer.md new file mode 100644 index 0000000..116920d --- /dev/null +++ b/docs/tutorials/3d-primer.md @@ -0,0 +1,234 @@ +--- +sidebar_position: 11 +--- + +# 3D VTubing Primer + +Below is a list of frequently asked questions about 3D VTubing; they are not specific to Warudo, but are still useful to know. + +## What is a blendshape? {#blendshape} + +A blendshape is a displacement of a number of vertices on the model mesh, like the following: + +![](pathname:///doc-img/zh-tutorials-18.gif) +Reference: https://developpaper.com/unity-realizes-facial-expression-transition-and-switching-animation-tutorial-through-blendshape/
+ +A blendshape's value is between 0 and 1. When the value is 0, the vertices do not move. When the value is 1, the vertices move to the target position, as shown below: + +![](pathname:///doc-img/zh-tutorials-19.gif) +Note that BlendShape in Unity takes values from 0-100, but in Warudo (and most 3D tools) BlendShape takes values from 0-1. Reference: https://developpaper.com/unity-realizes-facial-expression-transition-and-switching-animation-tutorial-through-blendshape/
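To make the weight concrete: a blendshape stores a displacement for each affected vertex, and the weight simply scales that displacement linearly (final position = base position + weight × displacement). The snippet below is only an illustrative sketch with made-up vertex data; it also shows the conversion from Unity's 0-100 scale to the 0-1 scale used by Warudo.

```python
# Illustrative sketch of blendshape weighting; the vertex data here is made up.
# final_vertex = base_vertex + weight * displacement, with weight in [0, 1].

def apply_blendshape(base_vertices, displacements, weight):
    """base_vertices: list of (x, y, z); displacements: per-vertex offset at weight 1.0."""
    return [
        (x + weight * dx, y + weight * dy, z + weight * dz)
        for (x, y, z), (dx, dy, dz) in zip(base_vertices, displacements)
    ]

def unity_to_warudo_weight(unity_weight):
    return unity_weight / 100.0  # Unity's 0-100 scale -> the 0-1 scale used here

print(apply_blendshape([(0.0, 0.0, 0.0)], [(0.0, 2.0, 0.0)], unity_to_warudo_weight(50)))
# -> [(0.0, 1.0, 0.0)]: at weight 0.5, the vertex has moved halfway to its target position.
```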
+ +The list of blendshapes on a model is entirely up to the modeler (and the modeling tool). Below are a few lists of common blendshapes (for reference only; your model may have more or fewer blendshapes): + +Source:https://medium.com/unity3danimation/overview-of-inverse-kinematics-9769a43ba956
+ +## What are "Perfect Sync" / ARKit blendshapes? {#arkit} + + diff --git a/docs/tutorials/readme-1.md b/docs/tutorials/readme-1.md index 8a1f6ee..3f85d3a 100644 --- a/docs/tutorials/readme-1.md +++ b/docs/tutorials/readme-1.md @@ -22,7 +22,7 @@ Time to dive in! Click on **Basic Setup → Get Started**. ![](pathname:///doc-img/en-getting-started-3.png)Basic setup window.
-Click on **Open Characters Folder**, and you should see a folder named "Characters" in the file explorer. This is where you put your character models. +Click on **Open Characters Folder**, and you should see a folder named "Characters" opened in the file explorer. This is where you put your character models. Warudo supports two model formats: `.vrm` and `.warudo`. The former is a standard avatar format exported by software like [VRoid Studio](https://vroid.com/en/studio), while the latter is Warudo's own format and supports virtually any Unity-compatible character model ([details](../modding/character-mod.md)). For now, let's just put a `.vrm` file in the "Characters" folder. @@ -37,7 +37,7 @@ Now, go back to Warudo and open the dropdown. You should see your character's na ![](pathname:///doc-img/en-getting-started-4.png)Character loaded into the scene.
-Let's click **OK** to move on to the next step. Here, you are asked if you would like Warudo to recommend a motion capture setup for you. If you select "Yes," Warudo asks you a few questions to decide what face tracking and pose tracking solutions to use. For example, if you have an iPhone and a webcam, and you don't have Leap Motion and full-body tracking hardware, Warudo will recommend using [iFacialMocap](../mocap/ifacialmocap.md) (an iPhone app) for face tracking and [MediaPipe](../mocap/mediapipe.md) (a built-in tracking that uses your webcam) for hand tracking. If you select "No," then you will have to manually select what face tracking and pose tracking solutions to use. +Let's click **OK** to move on to the next step. Here, you are asked if you would like Warudo to recommend a motion capture setup for you. If you select "Yes," Warudo asks you a few questions to decide what face tracking and pose tracking solutions to use. For example, if you have an iPhone and a webcam, and you don't have a Leap Motion Controller or full-body tracking hardware, Warudo will recommend using [iFacialMocap](../mocap/ifacialmocap.md) (an iPhone app) for face tracking and [MediaPipe](../mocap/mediapipe.md) (a built-in tracking that uses your webcam) for hand tracking. If you select "No," then you will have to manually select what face tracking and pose tracking solutions to use. If you are a first-time user, we recommend selecting "Yes" to let Warudo recommend a motion capture setup for you. Click **OK**, and select the options that best describe your setup, and click **OK** again. @@ -53,7 +53,7 @@ In the next screen, you are asked to confirm the face tracking and pose tracking ![](pathname:///doc-img/en-getting-started-8.png)Confirm motion capture setup.
-:::info +:::tip If you want to set up a secondary tracking, you can use the **Secondary Pose Tracking** option; this is helpful if you are using VR trackers/Mocopi for primary tracking and would like to use [MediaPipe](../mocap/mediapipe.md) or [Leap Motion Controller](../mocap/leap-motion.md) or [StretchSense Gloves](../mocap/stretchsense.md) for added-on finger tracking. ::: @@ -101,7 +101,7 @@ You should see a **translation gizmo** under the model's feet that can be used t ![](pathname:///doc-img/en-getting-started-16.png)Move the model.
-:::info +:::tip Can't see the gizmo? Try use **Mouse MMB** to pan the view. ::: @@ -130,6 +130,12 @@ That's much better! ![](pathname:///doc-img/en-getting-started-21.png)Back to normal.
+:::tip +You can also switch between different gizmos in the editor. + +![](pathname:///doc-img/en-getting-started-66.png) +::: + In addition to orbiting around the model, the camera can also be switched to a free-look mode, allowing the camera to fly through the scene. Select the **Camera** asset and choose **Free Look** for the **Control Mode**: ![](pathname:///doc-img/en-getting-started-22.png) @@ -145,11 +151,11 @@ After you've gotten used to the two camera modes, switch back to **Orbit Charact ![](pathname:///doc-img/en-getting-started-23.png)Back to default camera position.
-## Assets Page +## Assets Tab Now you have a basic sense of how to move the character and the camera, let's take a look at the editor! -By default, you are at the **Assets** page. An **asset** is essentially anything that is "in the scene," such as a character, a camera, a prop, a motion capture tracker, etc. +By default, you are at the **Assets** tab. An **asset** is essentially anything that is "in the scene," such as a character, a camera, a prop, a motion capture tracker, etc. You can click on an asset to select it. For example, click on the **Directional Light** asset to select it and adjust its color: @@ -161,7 +167,7 @@ Spooky! ![](pathname:///doc-img/en-getting-started-26.png)Light color set to red.
-A motion capture tracker is also an asset. For example, since I have selected **iFacialMocap** as my face tracking solution, I have a **iFacialMocap Receiver** asset in my scene. I can click on it to select it and click the **Calibrate iFacialMocap** button to calibrate my face tracking: +A motion capture tracker is also an asset. For example, since I have selected **iFacialMocap** as my face tracking solution, I have a **iFacialMocap Receiver** asset in my scene. I can click on it to select it and click the **Calibrate** button to calibrate my face tracking: ![](pathname:///doc-img/en-getting-started-27.png)Calibrate iFacialMocap.
@@ -239,7 +245,7 @@ The best part is that **Warudo's motion capture blends seamlessly with the idle If you are new to Warudo, just play around with the options and see what they do. You can always reset the options to their default values by clicking the **Reset** button when hovering on them. The later sections of this handbook will explore each type of asset and their options in detail with examples. ::: -Let's now try to add a **Screen** asset. Click on **Add Asset** and select **Screen**: +Finally, let's try to add a **Screen** asset. Click on **Add Asset** and select **Screen**: ![](pathname:///doc-img/en-getting-started-38.png)Add screen asset.
@@ -249,16 +255,16 @@ The screen asset is used to display images and videos, but it can also capture y ![](pathname:///doc-img/en-getting-started-39.png)Screen usage example.
-:::info +:::tip If the screen is too bright, you can adjust the **Tint** option to a gray color to make it darker. ::: -## Blueprints Page +## Blueprints Tab -Now you are familiar with assets, let's take a look at the **Blueprints** page. Click on the sigma icon to switch to the blueprints page: +Now you are familiar with assets, let's take a look at the **Blueprints** tab. Click on the sigma icon to switch to the blueprints tab: ![](pathname:///doc-img/en-getting-started-40.png) -Switch to blueprints page.
+Switch to the blueprints tab.
But what is a blueprint? Essentially, blueprints are "what should happen when something happens." For example, "when I press the **Space** key, the character should wave their hand." Or, "when I receive a Twitch redeem, a prop should be thrown at my character." @@ -287,7 +293,7 @@ Let's go back to the two nodes that switch to the **Joy** expression. You can ch ![](pathname:///doc-img/en-getting-started-48.png)Change hotkey.
-You can also make more things happen when the hotkey is pressed. For example, let's play a sound as well! Type **play sound** in the search bar and drag the **Play Sound** node to the right of the **Toggle Character Expression** node: +You can also make more things happen when the hotkey is pressed. How about playing a sound effect? Type **play sound** in the search bar and drag the **Play Sound** node to the right of the **Toggle Character Expression** node: ![](pathname:///doc-img/en-getting-started-49.png)Locate the "Play Sound" node.
@@ -302,11 +308,11 @@ This connects the two nodes together. Now, click on the **Source** dropdown menu ![](pathname:///doc-img/en-getting-started-51.png)Select a sound effect.
-:::info +:::tip You can click **Open Sounds Folder** and put in your own sound effects! ::: -Now, when you press **Ctrl+Shift+Q** (or the hotkey you assigned), your character should switch to the **Joy** expression, and a sound effect should play. +Now, when you press **Ctrl+Shift+Q** (or the hotkey you assigned), your character should switch to the **Joy** expression, and you should hear a sound effect! Let's implement something new. For example, let's play an emote animation on the character when we press **Space**. Pan to an empty area and try to add the following nodes yourself and connect them together as shown. Select the character asset in the **Character** dropdown menu, and select **Energetic** for the **Animation** option: @@ -324,7 +330,15 @@ Now, when you press **Space**, your character should play the **Energetic** emotCharacter plays the "Energetic" emote animation.
-That's our brief introduction to blueprints! Hopefully it offers you a glimpse of what blueprints can do. Warudo's blueprint system is a very powerful feature, but even without touching it, you can already do a lot of things with Warudo. Once you are familiar with the rest of Warudo's features, we recommend reading the [blueprint tutorial](/docs/blueprints/overview) to learn more about blueprints. +That's our brief introduction to blueprints! Hopefully it offers you a glimpse of what blueprints can do. Blueprints are a very powerful feature, but even without touching them at all, you can still do a lot of things with Warudo. Once you are familiar with the rest of the features, we recommend reading the [blueprint tutorial](/docs/blueprints/overview) to learn more about blueprints. + +:::tip +You can use **Mouse RMB** to drag a selection box to select multiple nodes. To remove a node, select it and press **Delete**. +::: + +:::info +You may have noticed there is a blueprint for face tracking and a blueprint for pose tracking. These blueprints were automatically generated by the onboarding assistant and can be customized to your liking. For beginners though, they could look intimidating, so we recommend leaving them alone for now. +::: ## Interaction Setup @@ -343,7 +357,7 @@ Follow the instructions to set up the streaming platform integration. For exampl ![](pathname:///doc-img/en-getting-started-60.png)Set up integration.
-When you are done, you will notice a new blueprint has been added to the blueprints page. Remember what we said about blueprints? They are "what should happen when something happens." In this case, the blueprint should contain the logic for launching liquid at our character when we receive a Twitch redeem called "Water." Click on the blueprint to select it, and locate the **Launch Liquid At Character** node: +When you are done, you will notice a new blueprint has been added to the blueprints tab. Remember what we said about blueprints? They are "what should happen when something happens." In this case, the blueprint should contain the logic for launching liquid at our character when we receive a Twitch redeem called "Water." Click on the blueprint to select it, and locate the **Launch Liquid At Character** node: ![](pathname:///doc-img/en-getting-started-61.png)A new blueprint has been added.
@@ -351,7 +365,7 @@ When you are done, you will notice a new blueprint has been added to the bluepri ![](pathname:///doc-img/en-getting-started-62.png)Locate the "Launch Liquid At Character" node.
-The nodes above are essentially saying: "when we receive a Twitch redeem, if the redeem title is 'Water', then launch liquid at the character asset 'Character'." You can click the "Enter" button on the **Launch Liquid At Character** node to test it out: +Indeed, the nodes above are essentially saying: "when we receive a Twitch redeem, if the redeem title is 'Water', then launch liquid at the character asset 'Character'." You can click the "Enter" button on the **Launch Liquid At Character** node to test it out: ![](pathname:///doc-img/en-getting-started-63.png)Test the interaction.
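If it helps to see that logic spelled out as ordinary code, here is a rough analogy (this is not Warudo's actual scripting API, just plain Python mirroring the event, the condition check, and the **Launch Liquid At Character** node):

```python
# Rough analogy of the blueprint above; not Warudo's real API.
# Twitch redeem event -> condition check -> "Launch Liquid At Character" node.

def launch_liquid_at_character(character):
    print(f"Launching liquid at '{character}'!")  # stand-in for the actual node's effect

def on_twitch_redeem(redeem_title):
    if redeem_title == "Water":                   # the condition the blueprint checks
        launch_liquid_at_character("Character")   # the action node connected after it

on_twitch_redeem("Water")   # this redeem triggers the liquid effect
on_twitch_redeem("Hearts")  # any other redeem title (example) does nothing
```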
@@ -368,7 +382,7 @@ It's time to put Warudo into your favorite streaming software! There are four wa * **Spout:** Warudo can output as a [Spout](https://spout.zeal.co/) source. To use this method, go to **Settings → Output** and enable **Spout Output**. **This is by default already enabled.** * **Window/Game capture:** You can also use window/game capture in OBS Studio or Streamlabs to capture the Warudo window. We do not recommend this method, as it takes up significantly more CPU/GPU resources than the other methods. -We recommend using the **Spout** method, as it has zero latency and best performance. To set up: +We recommend using the **Spout** method, as it has zero latency and best performance. Furthermore, it hides the transform gizmos, so you can move/rotate characters or props during a stream without showing the gizmos to the audience! To set up Spout output, follow these steps: * **OBS Studio:** Install the [**Spout2 Plugin for OBS Studio**](https://github.com/Off-World-Live/obs-spout2-plugin) first, then add a **Spout2 Capture** source, and select **Warudo** as the sender. * **Streamlabs:** Add a **Spout2** source, and select **Warudo** as the sender. @@ -376,6 +390,10 @@ We recommend using the **Spout** method, as it has zero latency and best perform ![](pathname:///doc-img/en-getting-started-65.png)Spout output to OBS Studio.
+:::tip +All output methods support transparency output. To enable transparent background, simply go to the camera asset and enable **Output → Transparent Background**. +::: + ## Saving and Loading Scenes Now that you have set up your scene, remember to save it! Unlike other software, Warudo does not automatically save your scene, so that you can experiment with different settings without worrying about messing up your scene. Click on the Warudo icon, and select **Save Scene**: diff --git a/static/doc-img/en-assets-1.png b/static/doc-img/en-assets-1.png new file mode 100644 index 0000000..35cc3ab Binary files /dev/null and b/static/doc-img/en-assets-1.png differ diff --git a/static/doc-img/en-character-1.png b/static/doc-img/en-character-1.png new file mode 100644 index 0000000..97202fd Binary files /dev/null and b/static/doc-img/en-character-1.png differ diff --git a/static/doc-img/en-character-10.png b/static/doc-img/en-character-10.png new file mode 100644 index 0000000..2c828aa Binary files /dev/null and b/static/doc-img/en-character-10.png differ diff --git a/static/doc-img/en-character-11.png b/static/doc-img/en-character-11.png new file mode 100644 index 0000000..bd353ef Binary files /dev/null and b/static/doc-img/en-character-11.png differ diff --git a/static/doc-img/en-character-2.png b/static/doc-img/en-character-2.png new file mode 100644 index 0000000..75fc4d0 Binary files /dev/null and b/static/doc-img/en-character-2.png differ diff --git a/static/doc-img/en-character-3.png b/static/doc-img/en-character-3.png new file mode 100644 index 0000000..44cc9f6 Binary files /dev/null and b/static/doc-img/en-character-3.png differ diff --git a/static/doc-img/en-character-4.png b/static/doc-img/en-character-4.png new file mode 100644 index 0000000..d43bfb7 Binary files /dev/null and b/static/doc-img/en-character-4.png differ diff --git a/static/doc-img/en-character-5.png b/static/doc-img/en-character-5.png new file mode 100644 index 0000000..86d79cf Binary files /dev/null and b/static/doc-img/en-character-5.png differ diff --git a/static/doc-img/en-character-6.png b/static/doc-img/en-character-6.png new file mode 100644 index 0000000..65ecfc8 Binary files /dev/null and b/static/doc-img/en-character-6.png differ diff --git a/static/doc-img/en-character-7.png b/static/doc-img/en-character-7.png new file mode 100644 index 0000000..421c7e4 Binary files /dev/null and b/static/doc-img/en-character-7.png differ diff --git a/static/doc-img/en-character-8.png b/static/doc-img/en-character-8.png new file mode 100644 index 0000000..75d19fa Binary files /dev/null and b/static/doc-img/en-character-8.png differ diff --git a/static/doc-img/en-character-9.png b/static/doc-img/en-character-9.png new file mode 100644 index 0000000..5fd38a7 Binary files /dev/null and b/static/doc-img/en-character-9.png differ diff --git a/static/doc-img/en-chingmu-1.png b/static/doc-img/en-chingmu-1.png new file mode 100644 index 0000000..f97475a Binary files /dev/null and b/static/doc-img/en-chingmu-1.png differ diff --git a/static/doc-img/en-getting-started-66.png b/static/doc-img/en-getting-started-66.png new file mode 100644 index 0000000..9e30c5f Binary files /dev/null and b/static/doc-img/en-getting-started-66.png differ diff --git a/static/doc-img/en-ifacialmocap-1.png b/static/doc-img/en-ifacialmocap-1.png new file mode 100644 index 0000000..922f3ec Binary files /dev/null and b/static/doc-img/en-ifacialmocap-1.png differ diff --git a/static/doc-img/en-leapmotion-1.png b/static/doc-img/en-leapmotion-1.png new file 
mode 100644 index 0000000..f925c0b Binary files /dev/null and b/static/doc-img/en-leapmotion-1.png differ diff --git a/static/doc-img/en-mediapipe-1.png b/static/doc-img/en-mediapipe-1.png new file mode 100644 index 0000000..ec98570 Binary files /dev/null and b/static/doc-img/en-mediapipe-1.png differ diff --git a/static/doc-img/en-mediapipe-2.png b/static/doc-img/en-mediapipe-2.png new file mode 100644 index 0000000..27d785c Binary files /dev/null and b/static/doc-img/en-mediapipe-2.png differ diff --git a/static/doc-img/en-mocap-1.png b/static/doc-img/en-mocap-1.png new file mode 100644 index 0000000..8b18d4b Binary files /dev/null and b/static/doc-img/en-mocap-1.png differ diff --git a/static/doc-img/en-mocap-2.png b/static/doc-img/en-mocap-2.png new file mode 100644 index 0000000..5476ca7 Binary files /dev/null and b/static/doc-img/en-mocap-2.png differ diff --git a/static/doc-img/en-mocap-3.png b/static/doc-img/en-mocap-3.png new file mode 100644 index 0000000..ac162a0 Binary files /dev/null and b/static/doc-img/en-mocap-3.png differ diff --git a/static/doc-img/en-mocap-4.png b/static/doc-img/en-mocap-4.png new file mode 100644 index 0000000..7cb4df4 Binary files /dev/null and b/static/doc-img/en-mocap-4.png differ diff --git a/static/doc-img/en-mocap-5.png b/static/doc-img/en-mocap-5.png new file mode 100644 index 0000000..92c4199 Binary files /dev/null and b/static/doc-img/en-mocap-5.png differ diff --git a/static/doc-img/en-mocap-6.png b/static/doc-img/en-mocap-6.png new file mode 100644 index 0000000..e81ad86 Binary files /dev/null and b/static/doc-img/en-mocap-6.png differ diff --git a/static/doc-img/en-mocopi-1.png b/static/doc-img/en-mocopi-1.png new file mode 100644 index 0000000..1d11ecb Binary files /dev/null and b/static/doc-img/en-mocopi-1.png differ diff --git a/static/doc-img/en-motionbuilder-1.png b/static/doc-img/en-motionbuilder-1.png new file mode 100644 index 0000000..c360ba3 Binary files /dev/null and b/static/doc-img/en-motionbuilder-1.png differ diff --git a/static/doc-img/en-motionbuilder-2.png b/static/doc-img/en-motionbuilder-2.png new file mode 100644 index 0000000..70aa1a8 Binary files /dev/null and b/static/doc-img/en-motionbuilder-2.png differ diff --git a/static/doc-img/en-noitom-1.png b/static/doc-img/en-noitom-1.png new file mode 100644 index 0000000..310dbac Binary files /dev/null and b/static/doc-img/en-noitom-1.png differ diff --git a/static/doc-img/en-openseeface-1.png b/static/doc-img/en-openseeface-1.png new file mode 100644 index 0000000..c8fe3e4 Binary files /dev/null and b/static/doc-img/en-openseeface-1.png differ diff --git a/static/doc-img/en-stretchsense-1.png b/static/doc-img/en-stretchsense-1.png new file mode 100644 index 0000000..2407394 Binary files /dev/null and b/static/doc-img/en-stretchsense-1.png differ diff --git a/static/doc-img/en-virdyn-1.png b/static/doc-img/en-virdyn-1.png new file mode 100644 index 0000000..c99a0f8 Binary files /dev/null and b/static/doc-img/en-virdyn-1.png differ diff --git a/static/doc-img/en-vmc-1.png b/static/doc-img/en-vmc-1.png new file mode 100644 index 0000000..4e89521 Binary files /dev/null and b/static/doc-img/en-vmc-1.png differ diff --git a/static/doc-img/zh-keyboard-1.webp b/static/doc-img/zh-keyboard-1.webp index 404dece..5ce1f8a 100644 Binary files a/static/doc-img/zh-keyboard-1.webp and b/static/doc-img/zh-keyboard-1.webp differ diff --git a/static/doc-img/zh-keyboard-3.webp b/static/doc-img/zh-keyboard-3.webp index 81d763e..be8af3d 100644 Binary files a/static/doc-img/zh-keyboard-3.webp and 
b/static/doc-img/zh-keyboard-3.webp differ