
TypeError with GPU training #71

Open
ShijianXu opened this issue Nov 27, 2024 · 2 comments

@ShijianXu

Dear authors,

I am running the following code and getting a TypeError:

model = LassoNetRegressor()

model.fit(X=train_df.drop('target', axis=1), 
          y=train_df['target'],
          X_val=val_df.drop('target', axis=1),
          y_val=val_df['target']
          )

model.score(test_df.drop('target', axis=1), test_df['target'])

The error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[12], line 9
      1 model = LassoNetRegressor()
      3 model.fit(X=train_df.drop('target', axis=1), 
      4           y=train_df['target'],
      5           X_val=val_df.drop('target', axis=1),
      6           y_val=val_df['target']
      7           )
----> 9 model.score(test_df.drop('target', axis=1), test_df['target'])
     11 # print("Best model scored", model.score(test_df.drop('target', axis=1), test_df['target']))
     12 # print("Lambda =", model.best_lambda_)

File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/sklearn/base.py:849, in RegressorMixin.score(self, X, y, sample_weight)
    846 from .metrics import r2_score
    848 y_pred = self.predict(X)
--> 849 return r2_score(y, y_pred, sample_weight=sample_weight)

File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/sklearn/utils/_param_validation.py:213, in validate_params.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
    207 try:
    208     with config_context(
    209         skip_parameter_validation=(
    210             prefer_skip_nested_validation or global_skip_validation
    211         )
    212     ):
--> 213         return func(*args, **kwargs)
    214 except InvalidParameterError as e:
    215     # When the function is just a wrapper around an estimator, we allow
    216     # the function to delegate validation to the estimator, but we replace
    217     # the name of the estimator by the name of the function in the error
    218     # message to avoid confusion.
    219     msg = re.sub(
    220         r"parameter of \w+ must be",
    221         f"parameter of {func.__qualname__} must be",
    222         str(e),
    223     )

File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/sklearn/metrics/_regression.py:1180, in r2_score(y_true, y_pred, sample_weight, multioutput, force_finite)
   1039 @validate_params(
   1040     {
   1041         "y_true": ["array-like"],
   (...)
   1059     force_finite=True,
   1060 ):
   1061     """:math:`R^2` (coefficient of determination) regression score function.
   1062 
   1063     Best possible score is 1.0 and it can be negative (because the
   (...)
   1178     -inf
   1179     """
-> 1180     y_type, y_true, y_pred, multioutput = _check_reg_targets(
   1181         y_true, y_pred, multioutput
   1182     )
   1183     check_consistent_length(y_true, y_pred, sample_weight)
   1185     if _num_samples(y_pred) < 2:

File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/sklearn/metrics/_regression.py:104, in _check_reg_targets(y_true, y_pred, multioutput, dtype)
    102 check_consistent_length(y_true, y_pred)
    103 y_true = check_array(y_true, ensure_2d=False, dtype=dtype)
--> 104 y_pred = check_array(y_pred, ensure_2d=False, dtype=dtype)
    106 if y_true.ndim == 1:
    107     y_true = y_true.reshape((-1, 1))

File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/sklearn/utils/validation.py:997, in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator, input_name)
    995         array = xp.astype(array, dtype, copy=False)
    996     else:
--> 997         array = _asarray_with_order(array, order=order, dtype=dtype, xp=xp)
    998 except ComplexWarning as complex_warning:
    999     raise ValueError(
   1000         "Complex data not supported\n{}\n".format(array)
   1001     ) from complex_warning

File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/sklearn/utils/_array_api.py:521, in _asarray_with_order(array, dtype, order, copy, xp)
    519     array = numpy.array(array, order=order, dtype=dtype)
    520 else:
--> 521     array = numpy.asarray(array, order=order, dtype=dtype)
    523 # At this point array is a NumPy ndarray. We convert it to an array
    524 # container that is consistent with the input's namespace.
    525 return xp.asarray(array)

File ~/miniconda3/envs/pytorch/lib/python3.9/site-packages/torch/_tensor.py:1062, in Tensor.__array__(self, dtype)
   1060     return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
   1061 if dtype is None:
-> 1062     return self.numpy()
   1063 else:
   1064     return self.numpy().astype(dtype, copy=False)

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Since I am going through the LassoNet interface, I don't think I have much flexibility to modify the code.
Do you have any idea what might be causing this error and how I should fix it?

Thank you very much!

@louisabraham
Collaborator

louisabraham commented Nov 27, 2024

We could handle this in the library, but in the meantime you can call predict yourself, then move the result to the CPU before calling the scoring function on the output.
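As a rough sketch of that workaround: the traceback shows `r2_score` failing when NumPy tries to materialize a CUDA tensor, so converting the prediction to a host-side array first avoids it. `to_numpy` below is a hypothetical helper (not part of LassoNet), and the commented usage assumes the `model` and `test_df` from the snippet above:

```python
import numpy as np

def to_numpy(a):
    """Move a torch tensor (possibly on a CUDA device) to host memory;
    pass NumPy arrays / pandas Series through unchanged."""
    if hasattr(a, "detach"):  # duck-typed check for torch.Tensor
        return a.detach().cpu().numpy()
    return np.asarray(a)

# Hypothetical usage with the model from this issue:
#   from sklearn.metrics import r2_score
#   y_pred = to_numpy(model.predict(test_df.drop('target', axis=1)))
#   print(r2_score(test_df['target'], y_pred))
```

This sidesteps `model.score` entirely, which is where the inherited `RegressorMixin.score` hands the raw GPU tensor to scikit-learn.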

@ShijianXu
Author

Thank you very much for such a quick reply! This suggestion temporarily solves my problem.
