Commit

Moving...(4)
Nekotora committed Sep 30, 2023
1 parent e0341fe commit 5c29e08
Showing 365 changed files with 246 additions and 103 deletions.
1 change: 0 additions & 1 deletion docs/assets/character/README.md
@@ -1,6 +1,5 @@
---
sidebar_position: 10
-sidebar_label: Character
---

# Character
60 changes: 31 additions & 29 deletions docs/blueprints/blueprints-intro.md
@@ -53,19 +53,17 @@ Here comes the problem—our flowchart made some key assumptions:

Are these assumptions necessarily valid?

-* <i style={{color: "#423da8"}}>The model has been adapted to the VRM format, so there are two blendshapes named `Blink` and `A`.</i>
+* _The model has been adapted to the VRM format, so there are two blendshapes named `Blink` and `A`._

-* Obviously, if your model hasn't been adapted to the VRM format (for example, it's a VRChat model or directly converted from an MMD model), then these two blendshapes won't be present.
+  Obviously, if your model hasn't been adapted to the VRM format (for example, it's a VRChat model or directly converted from an MMD model), then these two blendshapes won't be present.

-<br />
+* _The value of a face tracking data point matches the value of its corresponding blendshape all the time._

-* <i style={{color: "#423da8"}}>The value of a face tracking data point matches the value of its corresponding blendshape all the time.</i>
-* We assumed that when a person closes their eyes, the `eyes opened/closed` data should be 1, and when they open their eyes, the `eyes opened/closed` data should be 0. However, this is not always the case. Keep in mind that the results of motion capture can be unreliable, especially for consumer-grade motion capture systems. For instance, if you have small eyes, the system may recognize them as only partially open even though you keep your eyes wide open. So instead of a range of 0-1, the actual range for you could be 0.2-0.7.
+  We assumed that when a person closes their eyes, the `eyes opened/closed` data should be 1, and when they open their eyes, the `eyes opened/closed` data should be 0. However, this is not always the case. Keep in mind that the results of motion capture can be unreliable, especially for consumer-grade motion capture systems. For instance, if you have small eyes, the system may recognize them as only partially open even though you keep your eyes wide open. So instead of a range of 0-1, the actual range for you could be 0.2-0.7.

-<br />

-* <i style={{color: "#423da8"}}>The `Blink` blendshape should be set to 1 when the person blinks, and the `A` blendshape should be set to 1 when the person opens their mouth.</i>
-* Isn't this always true? Not necessarily. Suppose my model has activated the `Smile` expression:
+* _The `Blink` blendshape should be set to 1 when the person blinks, and the `A` blendshape should be set to 1 when the person opens their mouth._
+
+  Isn't this always true? Not necessarily. Suppose my model has activated the `Smile` expression:

![Cute!](/doc-img/en-blueprints-intro-5.webp)
<p class="img-desc">Cute!</p>
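As an aside on the second assumption above: fixing a mismatched range like 0.2-0.7 is, at its core, a linear remapping problem. Below is a minimal C# sketch of the idea (the helper is hypothetical, not Warudo's actual code; 0.2-0.7 is the example range from the text):

```csharp
using System;

public static class MocapRemap
{
    // Linearly remap a raw mocap value from the user's observed range
    // (e.g. 0.2-0.7 for eyes that never register as fully open or closed)
    // to the 0-1 range the blendshape expects, clamping outliers.
    public static float Remap(float raw, float observedMin, float observedMax)
    {
        float t = (raw - observedMin) / (observedMax - observedMin);
        return Math.Clamp(t, 0f, 1f);
    }
}
```

With this calibration, a raw `eyes opened/closed` reading of 0.7 maps to 1 (fully closed) and 0.2 maps to 0 (fully open); this kind of per-user remapping is exactly what most off-the-shelf software lacks.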
@@ -81,24 +79,28 @@ The reason is straightforward: the `Blink` blendshape pulls down the model's upp

So, what can we do? Let's take a look at how it's been handled in the past...

-<br />

-* <i style={{color: "#423da8"}}>It's possible that the models haven't been adapted to the VRM format, meaning that there won't be `Blink` and `A` blendshapes present.</i>
-* **Most software:** Adapt your model to the VRM format please. Non-VRM models are simply unsupported.\
-* **Self-developed/In-house:** Since the devs wrote and own the code, they can simply modify it to use the correct blendshape instead of `Blink`. For example, if using an MMD model, you could replace `Blink` with `まばたき`.
+* _It's possible that the models haven't been adapted to the VRM format, meaning that there won't be `Blink` and `A` blendshapes present._
+
+  **Most software:** Adapt your model to the VRM format please. Non-VRM models are simply unsupported.
+
+  **Self-developed/In-house:** Since the devs wrote and own the code, they can simply modify it to use the correct blendshape instead of `Blink`. For example, if using an MMD model, you could replace `Blink` with `まばたき`.

-<br />

-* <i style={{color: "#423da8"}}>It's possible that the value of the motion capture data should not match the value of its corresponding blendshape.</i>
-* **Most software:** There are tools that allow for sensitivity adjustments to specific blendshapes, such as [VSeeFace](https://www.vseeface.icu/), which can adjust the blink sensitivity, or [RhyLive (Windows)](https://rhythmo.cn/rhylive/), which allows for adjustments to the sensitivity of each ARKit blendshape. However, these adjustments are usually simple multipliers that make it easier to trigger a blendshape (e.g., closing the eyes). Even a simple linear mapping from mocap data to blendshape values, like the one in [VTube Studio](https://denchisoft.com/), is unsupported in most 3D VTuber software:
+* _It's possible that the value of the motion capture data should not match the value of its corresponding blendshape._
+
+  **Most software:** There are tools that allow for sensitivity adjustments to specific blendshapes, such as [VSeeFace](https://www.vseeface.icu/), which can adjust the blink sensitivity, or [RhyLive (Windows)](https://rhythmo.cn/rhylive/), which allows for adjustments to the sensitivity of each ARKit blendshape. However, these adjustments are usually simple multipliers that make it easier to trigger a blendshape (e.g., closing the eyes). Even a simple linear mapping from mocap data to blendshape values, like the one in [VTube Studio](https://denchisoft.com/), is unsupported in most 3D VTuber software:

![](/doc-img/en-blueprints-intro-7.webp)

-* **Self-developed/In-house:** Since the devs wrote and own the code, they can manually adjust the values to correct for any discrepancies. They need to do this for every user though.
+  **Self-developed/In-house:** Since the devs wrote and own the code, they can manually adjust the values to correct for any discrepancies. They need to do this for every user, though.

-* <i style={{color: "#423da8"}}>The values of certain blendshapes may need to be constrained based on the activation of other blendshapes. For instance, when the `Smile` blendshape is activated, the value of the `Blink` blendshape should be constantly 0, even if the user blinks their eyes.</i>
-* **Most software:** There are tools that allow for constraints to be added to specific blendshapes when activating an expression, such as [Luppet](https://luppet.appspot.com/), which allows you to control whether or not blinking is allowed. However, most software only supports constraining blendshapes defined by VRM; other blendshapes are often unsupported, which can be a problem for custom models that often have many of them.\
-* **Self-developed/In-house:** Again, since the devs wrote and own the code, they can manually adjust the values to correct for any discrepancies. They really should have come up with some automation like a config UI at this point though.
+* _The values of certain blendshapes may need to be constrained based on the activation of other blendshapes. For instance, when the `Smile` blendshape is activated, the value of the `Blink` blendshape should be constantly 0, even if the user blinks their eyes._
+
+  **Most software:** There are tools that allow for constraints to be added to specific blendshapes when activating an expression, such as [Luppet](https://luppet.appspot.com/), which allows you to control whether or not blinking is allowed. However, most software only supports constraining blendshapes defined by VRM; other blendshapes are often unsupported, which can be a problem for custom models that often have many of them.
+
+  **Self-developed/In-house:** Again, since the devs wrote and own the code, they can manually adjust the values to correct for any discrepancies. They really should have come up with some automation like a config UI at this point, though.

By now, you may have realized that even though our requirements seem simple (just tracking the eyes and mouth!), adapting to all scenarios is still challenging. Most publicly available 3D VTuber software has significant limitations in this area; while an in-house solution seems like it could solve all problems, the vast majority of self-developed projects do not consider customization and are only adapted for a single model and user. Thus, even simple changes in requirements, such as adjusting motion capture sensitivity, require updates to the program. And more complex requirements, such as changing scenes, character accessories, or adding special effects, add even more complexity. The technical demands of building a VTuber software from scratch are also a significant obstacle for many VTubers aspiring to stream in 3D.

@@ -181,19 +183,19 @@ No more weird eyelids!

Let's revisit our concerns earlier:

-* <i style={{color: "#423da8"}}>It's possible that the models haven't been adapted to the VRM format, meaning that there won't be `Blink` and `A` blendshapes present.</i>
-* **Warudo:** Doesn't force you to stick to specific blendshape names. Just use what's available on your model.
+* _It's possible that the models haven't been adapted to the VRM format, meaning that there won't be `Blink` and `A` blendshapes present._
+
+  **Warudo:** Doesn't force you to stick to specific blendshape names. Just use what's available on your model.

-<br />

-* <i style={{color: "#423da8"}}>It's possible that the value of the motion capture data should not match the value of its corresponding blendshape.</i>
-* **Warudo:** You have the freedom to manipulate the motion capture data as you please!
+* _It's possible that the value of the motion capture data should not match the value of its corresponding blendshape._
+
+  **Warudo:** You have the freedom to manipulate the motion capture data as you please!

-<br />

-* <i style={{color: "#423da8"}}>The values of certain blendshapes may need to be constrained based on the activation of other blendshapes. For instance, when the `Smile` blendshape is activated, the value of the `Blink` blendshape should be constantly 0, even if the user blinks their eyes.</i>
-* **Warudo:** Simply add a `Constraint BlendShape` node.
+* _The values of certain blendshapes may need to be constrained based on the activation of other blendshapes. For instance, when the `Smile` blendshape is activated, the value of the `Blink` blendshape should be constantly 0, even if the user blinks their eyes._
+
+  **Warudo:** Simply add a `Constraint BlendShape` node.
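Under the hood, a constraint like this is just a conditional override applied after the mocap mapping. A rough C# sketch of what such a node computes (illustrative only; in Warudo you wire up the node rather than write code):

```csharp
// If the Smile expression is sufficiently active, force Blink to 0 so the
// smiling eyes are not deformed by the Blink blendshape; otherwise pass the
// tracked blink value through unchanged. The 0.5 threshold is an assumption.
public static float ConstrainBlink(float blink, float smile, float threshold = 0.5f)
{
    return smile >= threshold ? 0f : blink;
}
```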

Everything seems to be solved! Of course, you don't have to build blueprints from scratch like above: as shown [in the Getting Started tutorial](../tutorials/readme-1.md), Warudo allows you to generate blueprints with a single click that match your model's specific setup. We showed how to map motion capture data (such as puffing your face) to a specific blendshape—but this is just one of the thousands of possibilities. For example, the following blueprint can be used to change the model's idle animation and expression when you press Alt+C.
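Expressed as plain code, the logic of such a blueprint is nothing more than an event handler. Here is a hypothetical C# equivalent (Warudo wires this up with nodes instead of code, and every name below is illustrative):

```csharp
using System;

// A hypothetical, self-contained sketch of the Alt+C blueprint's logic.
class IdleSwitcherSketch
{
    // Stand-ins for the character's controls; the names are made up.
    static string idleAnimation = "Idle A";
    static string expression = "Neutral";

    static void OnHotkey(ConsoleKey key, bool altHeld)
    {
        if (altHeld && key == ConsoleKey.C)
        {
            idleAnimation = "Idle B"; // switch the idle animation...
            expression = "Smile";     // ...and the expression in one go
            Console.WriteLine($"Idle: {idleAnimation}, Expression: {expression}");
        }
    }

    static void Main() => OnHotkey(ConsoleKey.C, altHeld: true);
}
```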

30 changes: 0 additions & 30 deletions docs/misc/dai-ji-dong-hua-yi-lan.md
@@ -1,33 +1,3 @@
# Idle Animations at a Glance

Since Warudo's configuration interface does not support displaying images for now, and there are more than 500 default idle animations, you can use this page to find the idle animation you want, and select the number (e.g. `010_0010`) corresponding to the option in "Animation" -> "Default Idle Animation" of your character.

-<div>

-<figure><img src="/images/010-0010.jpg" alt="" /><figcaption><p>010_0010</p></figcaption></figure>

-<figure><img src="/images/010-0020.jpg" alt="" /><figcaption></figcaption></figure>

-<figure><img src="/images/010-0030.jpg" alt="" /><figcaption></figcaption></figure>

-<figure><img src="/images/010-0040.jpg" alt="" /><figcaption></figcaption></figure>

-<figure><img src="/images/010-0050.jpg" alt="" /><figcaption></figcaption></figure>

-<figure><img src="/images/010-0060.jpg" alt="" /><figcaption></figcaption></figure>

-<figure><img src="/images/010-0070.jpg" alt="" /><figcaption></figcaption></figure>

-</div>
19 changes: 12 additions & 7 deletions docs/modding/mod-sdk.md
@@ -61,18 +61,23 @@ Return to Unity and wait for the project to reload; make sure there are no error
:::caution
If you encounter an error with the message `An error occurred while resolving packages / Error adding package` and clicking on it reveals a message similar to `No 'git' executable was found. Please install Git on your system then restart Unity and Unity Hub`, it means that Git is not installed on your system.

-![](</images/image(8)(1)(1)(1).jpg>)\
-\
+![](/doc-img/en-mod-sdk-1.webp)

To resolve this issue, you need to download Git from [https://git-scm.com/download](https://git-scm.com/download) and then restart both Unity and Unity Hub.
:::

Confirm that the "Api Compatibility Level" in "File -> Build Settings... -> Player Settings... -> Other Settings" is set to .NET Framework.

-<figure><img src="/images/image(40).jpg" alt="" /><figcaption></figcaption></figure>
+![](/doc-img/en-mod-sdk-2.webp)

Download the SDK and import it into your Unity project, either by creating a new project or using an existing one.

-{% file src="/images/WarudoSDK 0.10.0.unitypackage" %}
+<a href="/sdk/WarudoSDK-0.10.0.unitypackage" target="_blank">
+  <div className="file-box">
+    <p>WarudoSDK-0.10.0.unitypackage</p>
+  </div>
+</a>

:::caution
If you are importing into an **existing** project, and you have any of the following installed:
@@ -103,15 +108,15 @@ If you see some errors, keep in mind that some "errors" are really just warnings

To create a new Mod, go to the menu bar and select "Warudo" -> "New Mod":

-![](https://user-images.githubusercontent.com/3406505/181208455-9ab46a52-4edd-401c-807e-2d2d6ae24eec.png)
+![](/doc-img/en-mod-sdk-3.webp)

Give your mod a name, and click "Create Mod!" to create it:

-![](https://user-images.githubusercontent.com/3406505/181208739-8916bccd-a669-4f48-aa41-3baf61670ef4.png)
+![](/doc-img/en-mod-sdk-4.webp)

You should be able to see that a folder for your mod has been created under the Assets folder:

-![](https://user-images.githubusercontent.com/3406505/181209065-a63e4ba1-005a-45d3-853c-3aa4013f66a5.png)
+![](/doc-img/en-mod-sdk-5.webp)

Now you can start creating mods! How does a [prop mod](prop-mod.md) sound?

2 changes: 1 addition & 1 deletion i18n/zh/docusaurus-plugin-content-docs/current/README.md
@@ -29,7 +29,7 @@ Warudo is an avatar animation software designed for VTuber livestreaming that can
Warudo strives to strike a balance between the approaches above:

* **Optimized for at-home 3D:** [A single iPhone covers face tracking + upper-body motion capture](mocap/rhylive.md), and a [webcam-based motion capture option](mocap/mediapipe.md) is also provided. Comes with [500+ built-in idle animations](assets/character/#dong-hua) that, combined with the [IK features](assets/character/#shen-ti-ik), let your avatar strike any pose you want.
-* **Fine-grained features:** From [seamless blending of motion capture and animations](assets/character/#dong-hua), [multi-camera output](assets/camera.md#duo-she-xiang-ji), and the most powerful [face tracking mapping and expression switching system](assets/character/blendshape-expression.md) among comparable software, down to [motion capture data smoothing](advanced/blueprints-intro.md) and [camera LUT materials](assets/camera.md#se-tiao-ying-she-he-yan-se-fen-ji), rich configuration options make your avatar's animation more natural and more expressive.
+* **Fine-grained features:** From [seamless blending of motion capture and animations](assets/character/#dong-hua), [multi-camera output](assets/camera.md#duo-she-xiang-ji), and the most powerful [face tracking mapping and expression switching system](assets/character/blendshape-expression.md) among comparable software, down to [motion capture data smoothing](blueprints/blueprints-intro.md) and [camera LUT materials](assets/camera.md#se-tiao-ying-she-he-yan-se-fen-ji), rich configuration options make your avatar's animation more natural and more expressive.
* **Simple and easy to use:** A clean yet powerful configuration interface, separate from the rendering window, which can even be controlled remotely from another computer / phone. Detailed documentation with configuration examples for common use cases.
* **Modular configuration:** Character models, environment models, camera setups, and even interaction logic are all saved in Warudo's scene files; you can switch between streaming scenes with one click, and developers / staff can configure a scene and hand it to a VTuber ready to use.
* **Easy to extend:** Import [external 3D assets](modding/mod-sdk.md), with [Steam Workshop](https://steamcommunity.com/app/2079740/workshop/) support and compatibility with the [URP render pipeline / NiloToonURP](https://github.com/ColinLeung-NiloCat/UnityURPToonLitShaderExample); a C# SDK lets you develop or extend features yourself, with support for hot-compiling and loading C# scripts at runtime for debugging.
@@ -1,3 +1,7 @@
+---
+sidebar_position: 90
+---
+
# Anchor

An anchor is an absolute or relative point in scene space that can serve as a target for the character's [look IK](character/#look-ik) or [body IK](character/#body-ik).
@@ -1,3 +1,7 @@
+---
+sidebar_position: 50
+---
+
# Camera

The camera is... a camera!
@@ -1,3 +1,7 @@
+---
+sidebar_position: 10
+---
+
# Character

Warudo natively supports importing character models in the [VRM format](https://vrm.dev/en/univrm/). If your model is not VRM (for example, an MMD- or VRChat-specific model), you can export it to the `.warudo` format with the [Mod SDK](https://tiger-tang.gitbook.io/warudo/advanced/sdk) and then load it into Warudo.
@@ -1,3 +1,8 @@
+---
+sidebar_position: 10
+---
+
+
# Expressions

* Import VRM expressions: If the model is in VRM format, imports all of the model's built-in VRM [BlendShapeClip](https://vrm.dev/en/univrm/blendshape/univrm_blendshape.html)s as expressions.
@@ -1,3 +1,7 @@
+---
+sidebar_position: 70
+---
+
# Environment

The so-called "3D background." An environment in Warudo can be **any** Unity scene; just export it to the `.warudo` format with the [Mod SDK](https://tiger-tang.gitbook.io/warudo/advanced/sdk) and it can be loaded into Warudo.
@@ -1,3 +1,7 @@
+---
+sidebar_position: 30
+---
+
# Keyboard / Trackpad

The keyboard and trackpad are special props that can generate animations of the character using a keyboard or trackpad even when the hands are not being tracked.
@@ -1,3 +1,7 @@
+---
+sidebar_position: 80
+---
+
# Lights

Light sources in the scene; directional lights and point lights are currently supported.
4 changes: 4 additions & 0 deletions i18n/zh/docusaurus-plugin-content-docs/current/assets/prop.md
@@ -1,3 +1,7 @@
+---
+sidebar_position: 20
+---
+
# Props

Props are arbitrary 3D models placed freely in the scene; they can also be bound to the character's bones to act as accessories. Besides the built-in models, external models can be exported to the `.warudo` format with the [Mod SDK](https://tiger-tang.gitbook.io/warudo/advanced/sdk) and loaded into Warudo.
@@ -1,3 +1,6 @@
+---
+sidebar_position: 100
+---

# Motion Recorder (Export FBX / BVH)

@@ -1,3 +1,7 @@
+---
+sidebar_position: 60
+---
+
# Screen

Capture the desktop or a specific window in real time, just like OBS, and display it in the scene. It can even display images and web pages!
5 changes: 5 additions & 0 deletions i18n/zh/docusaurus-plugin-content-docs/current/assets/tail.md
@@ -1,3 +1,8 @@
+---
+sidebar_position: 40
+---
+
+
# Tail

Make your character's tail sway!
@@ -1,3 +1,7 @@
+---
+sidebar_position: 110
+---
+
# VMC Transmitter

Send your character's animation data to any software that supports the [VirtualMotionCapture protocol](https://protocol.vmc.info/english)!
@@ -1,3 +1,7 @@
+---
+sidebar_position: 40
+---
+
# Advanced Nodes

### Debugging
@@ -1,3 +1,7 @@
+---
+sidebar_position: 30
+---
+
# Basic Nodes

### Events
