Empty module name at to_hls.InferStreamingMaxPool operation #574
whitelambs asked this question in Q&A (unanswered, 0 replies)
Dear FINN team,
I have a custom CNV model with different dimensions and a different input size.
```python
import torch
from torch.nn import Module, ModuleList, BatchNorm2d, MaxPool2d, BatchNorm1d
from brevitas.nn import QuantConv2d, QuantIdentity, QuantLinear
from brevitas.core.restrict_val import RestrictValueType
from .tensor_norm import TensorNorm
from .common import CommonWeightQuant, CommonActQuant

CNV_OUT_CH_POOL = [(64, True), (128, True), (256, False),
                   (256, True), (512, False), (512, True)]
INTERMEDIATE_FC_FEATURES = [(51200, 512), (512, 512)]
LAST_FC_IN_FEATURES = 512
LAST_FC_PER_OUT_CH_SCALING = False
POOL_SIZE = 2
KERNEL_SIZE = 3


class CNV(Module):
    ...  # class body omitted in this snippet


def cnv(cfg):
    weight_bit_width = cfg.getint('QUANT', 'WEIGHT_BIT_WIDTH')
    act_bit_width = cfg.getint('QUANT', 'ACT_BIT_WIDTH')
    in_bit_width = cfg.getint('QUANT', 'IN_BIT_WIDTH')
    num_classes = cfg.getint('MODEL', 'NUM_CLASSES')
    in_channels = cfg.getint('MODEL', 'IN_CHANNELS')
    net = CNV(weight_bit_width=weight_bit_width,
              act_bit_width=act_bit_width,
              in_bit_width=in_bit_width,
              num_classes=num_classes,
              in_ch=in_channels)
    return net
```
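For reference, the first entry of `INTERMEDIATE_FC_FEATURES` (51200) has to match the flattened output of the conv stack. A small sketch that forward-computes that size; the 214×214 input and the no-padding 3×3 convs with 2×2 pools are my assumptions chosen to make the numbers come out to 51200, not values taken from the actual model:

```python
# Sketch: derive the flattened feature count feeding the first FC layer.
# Assumes 3x3 "valid" convs (no padding) and 2x2 max-pools with stride 2;
# the 214x214 input size is a guess that yields 51200.
CNV_OUT_CH_POOL = [(64, True), (128, True), (256, False),
                   (256, True), (512, False), (512, True)]
KERNEL_SIZE = 3
POOL_SIZE = 2

def flatten_size(in_size):
    size = in_size
    for out_ch, pool in CNV_OUT_CH_POOL:
        size -= KERNEL_SIZE - 1   # valid conv shrinks spatial dim by k-1
        if pool:
            size //= POOL_SIZE    # 2x2 max-pool halves the spatial dim
    return out_ch * size * size   # channels x height x width

print(flatten_size(214))  # -> 51200 (512 channels x 10 x 10)
```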
I trained my model mainly using https://github.com/Xilinx/brevitas/tree/master/src/brevitas_examples/bnn_pynq.
Then I exported the model using FINNManager.export. During export, I received the warnings below.
After that, I followed the same steps as in end2end_example/bnn-pynq/cnv_end2end_example to build the CNV with FINN. After the streamline step, my nodes were not correct, as shown in the picture above.
Therefore, I removed the extra Transpose layers and applied MakeMaxPoolNHWC again. As a result, I obtained this ONNX model.
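On the extra Transpose layers: a back-to-back NCHW→NHWC / NHWC→NCHW pair is a no-op, which is why removing such a pair by hand is safe. A minimal stdlib sketch of that check (the permutation values are the standard ONNX ones; the helper names are mine, not FINN's):

```python
# Sketch: two back-to-back Transpose nodes cancel iff their perms
# compose to the identity - the condition under which a leftover
# Transpose pair after streamlining can be dropped.
NCHW_TO_NHWC = [0, 2, 3, 1]
NHWC_TO_NCHW = [0, 3, 1, 2]

def compose(p, q):
    # Overall perm of applying Transpose(perm=q) first, then Transpose(perm=p).
    return [q[p[i]] for i in range(len(p))]

def cancels(p, q):
    return compose(p, q) == list(range(len(p)))

print(cancels(NCHW_TO_NHWC, NHWC_TO_NCHW))  # -> True
```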
Then I tried to apply the InferStreamingMaxPool transformation, but I keep receiving this error.
Thanks in advance for your help!