Problem loading model state #97
Comments
Hello @amilton-reis, I am facing the same problem. It happened to me when I tried to load a deepmatcher model in Databricks. I have not found a solution yet. Can anyone help?
DeepMatcher is passing an AttrTensor as input, but PyTorch expects a Tensor. Has anyone found a solution?
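For context, a minimal sketch of the mismatch itself (plain PyTorch, nothing deepmatcher-specific; the layer sizes and index values are placeholders): nn.Embedding only accepts a plain torch.Tensor of token indices, so if an AttrTensor wrapper reaches it without being unwrapped, the call fails with exactly the TypeError quoted in the traceback below.

import torch
import torch.nn as nn

# A toy embedding layer; sizes are arbitrary placeholders.
embed = nn.Embedding(num_embeddings=100, embedding_dim=8)

indices = torch.tensor([[1, 2, 3]])  # plain LongTensor of token indices
out = embed(indices)                 # works: forward() receives a Tensor
print(out.shape)                     # torch.Size([1, 3, 8])

# Passing a wrapper object such as deepmatcher's AttrTensor instead of a
# Tensor raises:
#   TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not AttrTensor
# The wrapped tensor has to be extracted first (the modules.py frame in the
# traceback below does this via arg.data when isinstance(arg, AttrTensor)).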
Dear @sidharthms, would you have any advice? Thanks in advance.
Could you share a Colab notebook replicating the issue? Knowing the sequence of events that leads to this will make it easier to debug.
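For reference, a hedged outline of the sequence of events that seems to trigger this, reconstructed from the original report below and the Databricks comment above (file names and paths are placeholders; dm.data.process and run_train follow deepmatcher's documented quick-start usage):

import deepmatcher as dm

# 1. In the training environment: process the data, train, and save the model state.
train, validation, test = dm.data.process(
    path='data', train='train.csv', validation='valid.csv', test='test.csv')
model = dm.MatchingModel(attr_summarizer='hybrid')
model.run_train(train, validation, best_save_path='model_1.pth')

# 2. In a fresh environment (e.g. a Databricks cluster): rebuild the model and
#    load the saved state. This is the step that raises the TypeError below.
model_t = dm.MatchingModel(attr_summarizer='hybrid')
model_t.load_state('model_1.pth')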
Hi,
I'm trying to use the .load_state() function, but the result is:
TypeError Traceback (most recent call last)
in
1 model_t = dm.MatchingModel(attr_summarizer="hybrid")
----> 2 model_t.load_state("model_1.pth")
/local_disk0/.ephemeral_nfs/envs/pythonEnv-33cf32f5-60db-4ff5-acdf-2c87c014459e/lib/python3.8/site-packages/deepmatcher/models/core.py in load_state(self, path, map_location)
479 MatchingDataset.finalize_metadata(train_info)
480
--> 481 self.initialize(train_info, self.state_meta.init_batch)
482
483 self.load_state_dict(state['model'])
/local_disk0/.ephemeral_nfs/envs/pythonEnv-33cf32f5-60db-4ff5-acdf-2c87c014459e/lib/python3.8/site-packages/deepmatcher/models/core.py in initialize(self, train_dataset, init_batch)
353 sort_in_buckets=False)
354 init_batch = next(run_iter.__iter__())
--> 355 self.forward(init_batch)
356
357 # Keep this init_batch for future initializations.
/local_disk0/.ephemeral_nfs/envs/pythonEnv-33cf32f5-60db-4ff5-acdf-2c87c014459e/lib/python3.8/site-packages/deepmatcher/models/core.py in forward(self, input)
417 for name in self.meta.all_text_fields:
418 attr_input = getattr(input, name)
--> 419 embeddings[name] = self.embed[name](attr_input)
420
421 attr_comparisons = []
/local_disk0/.ephemeral_nfs/envs/pythonEnv-33cf32f5-60db-4ff5-acdf-2c87c014459e/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/local_disk0/.ephemeral_nfs/envs/pythonEnv-33cf32f5-60db-4ff5-acdf-2c87c014459e/lib/python3.8/site-packages/deepmatcher/models/modules.py in forward(self, *args)
185 module_args.append(arg.data if isinstance(arg, AttrTensor) else arg)
186
--> 187 results = self.module(*module_args)
188
189 if not isinstance(args[0], AttrTensor):
/local_disk0/.ephemeral_nfs/envs/pythonEnv-33cf32f5-60db-4ff5-acdf-2c87c014459e/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/local_disk0/.ephemeral_nfs/envs/pythonEnv-33cf32f5-60db-4ff5-acdf-2c87c014459e/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input)
143
144 def forward(self, input: Tensor) -> Tensor:
--> 145 return F.embedding(
146 input, self.weight, self.padding_idx, self.max_norm,
147 self.norm_type, self.scale_grad_by_freq, self.sparse)
/local_disk0/.ephemeral_nfs/envs/pythonEnv-33cf32f5-60db-4ff5-acdf-2c87c014459e/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1911 # remove once script supports set_grad_enabled
1912 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1913 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1914
1915
TypeError: embedding(): argument 'indices' (position 2) must be Tensor, not AttrTensor
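There is no confirmed fix in this thread. One hedged thing to try (a sketch only: it assumes that a model which has already been initialized against real, re-processed training data will not replay the pickled init_batch inside load_state, which is not verified here):

import deepmatcher as dm

# Re-process the same data splits in the new environment (as in the sketch in
# the comments above), then build the lazy modules from a real batch before
# restoring the trained parameters. Paths are placeholders.
train, validation, test = dm.data.process(
    path='data', train='train.csv', validation='valid.csv', test='test.csv')

model_t = dm.MatchingModel(attr_summarizer='hybrid')
model_t.initialize(train)          # initialize() is the method seen in the traceback above
model_t.load_state('model_1.pth')  # then load the saved weights

If this still fails, it would at least narrow the problem down to load_state's replay of the saved init_batch rather than to the model construction itself.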