When I try to run `python main.py --input_dir input/face --im_path1 90.png --im_path2 15.png --im_path3 117.png --sign realistic --smooth 5`, it reports an error:
No CUDA runtime is found, using CUDA_HOME='/usr/local/cuda'
Traceback (most recent call last):
  File "main.py", line 13, in <module>
    from models.Embedding import Embedding
  File "/root/autodl-tmp/Barbershop/models/Embedding.py", line 3, in <module>
    from models.Net import Net
  File "/root/autodl-tmp/Barbershop/models/Net.py", line 3, in <module>
    from models.stylegan2.model import Generator
  File "/root/autodl-tmp/Barbershop/models/stylegan2/model.py", line 11, in <module>
    from models.stylegan2.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
  File "/root/autodl-tmp/Barbershop/models/stylegan2/op/__init__.py", line 1, in <module>
    from .fused_act import FusedLeakyReLU, fused_leaky_relu
  File "/root/autodl-tmp/Barbershop/models/stylegan2/op/fused_act.py", line 14, in <module>
    os.path.join(module_path, "fused_bias_act_kernel.cu"),
  File "/root/miniconda3/envs/Brshop/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1296, in load
    keep_intermediates=keep_intermediates)
  File "/root/miniconda3/envs/Brshop/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1518, in _jit_compile
    is_standalone=is_standalone)
  File "/root/miniconda3/envs/Brshop/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1619, in _write_ninja_file_and_build_library
    is_standalone=is_standalone)
  File "/root/miniconda3/envs/Brshop/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 2014, in _write_ninja_file_to_build_library
    cuda_flags = common_cflags + COMMON_NVCC_FLAGS + _get_cuda_arch_flags()
  File "/root/miniconda3/envs/Brshop/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1780, in _get_cuda_arch_flags
    arch_list[-1] += '+PTX'
IndexError: list index out of range
Can someone help me with that, please?
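For context on the error itself: the message "No CUDA runtime is found" means PyTorch cannot see a GPU, so `_get_cuda_arch_flags()` has no detected compute capability to build for and its `arch_list` ends up empty, which is exactly what triggers `arch_list[-1] += '+PTX'` to raise `IndexError`. A common workaround (a sketch, not a guaranteed fix) is to set the `TORCH_CUDA_ARCH_LIST` environment variable before running, so the JIT build does not depend on GPU auto-detection. The value `"7.5"` below is only an example for a Turing-class card; substitute the compute capability of your own GPU. Note that if the machine truly has no working CUDA runtime, the compiled kernels still cannot execute at run time.

```shell
# Pin the CUDA architectures for torch's JIT extension build so that
# _get_cuda_arch_flags() does not fall back to (empty) GPU auto-detection.
# "7.5" is an example value; pick the compute capability of your GPU.
export TORCH_CUDA_ARCH_LIST="7.5"

# Then re-run the original command in the same shell, e.g.:
# python main.py --input_dir input/face --im_path1 90.png --im_path2 15.png \
#     --im_path3 117.png --sign realistic --smooth 5
```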
@MajidShafaee Hello! I gave up on using this project. In fact, for my needs I found I could simply pick a hairstyle that suits me and do a face swap instead; that is another way to approach it. If you are interested in face swapping, you can try: https://github.com/facefusion/facefusion