Commit
Fix (QuantLayer): make bias optional for QuantRNN and QuantLSTM
fabianandresgrob committed Feb 14, 2024

Unverified: this commit is not signed, but one or more authors require that any commit attributed to them be signed.
1 parent d58dc90 commit 9675e4c
Showing 1 changed file with 2 additions and 2 deletions.
src/brevitas/nn/quant_rnn.py (2 additions, 2 deletions)
@@ -882,7 +882,7 @@ def __init__(
     hidden_size: int,
     num_layers: int = 1,
     nonlinearity: str = 'tanh',
-    bias: bool = True,
+    bias: Optional[bool] = True,
     batch_first: bool = False,
     bidirectional: bool = False,
     weight_quant=Int8WeightPerTensorFloat,
@@ -921,7 +921,7 @@ def __init__(
     input_size: int,
     hidden_size: int,
     num_layers: int = 1,
-    bias: bool = True,
+    bias: Optional[bool] = True,
     batch_first: bool = False,
     bidirectional: bool = False,
     weight_quant=Int8WeightPerTensorFloat,
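The annotation change widens the documented values for `bias` from {True, False} to also include None, which matters for static type checkers even when the runtime behavior is unchanged. A minimal standalone sketch of how an `Optional[bool]` flag is typically consumed (this is not brevitas code; `make_cell` and its None-handling are hypothetical illustrations):

```python
from typing import Optional

def make_cell(hidden_size: int, bias: Optional[bool] = True) -> dict:
    # Hypothetical sketch: under the old `bias: bool` annotation, passing
    # None would be flagged by a static type checker; with Optional[bool],
    # None is an accepted value. Here None is treated like False (no bias).
    use_bias = bool(bias)  # None and False both disable the bias term
    return {"hidden_size": hidden_size, "bias": use_bias}
```

For example, `make_cell(16, bias=None)` now type-checks and produces a cell without a bias term, the same as passing `bias=False`.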
