I am using the svm API to write my own trainer, but after looking at this code it seems the CUDA-accelerated part is only in cross-validation. So I am wondering: how can I benefit from CUDA if I don't use cross-validation?
By the way, I can't find a place to open an issue...
Hello cysin,
You are correct: CUDA acceleration is used only during cross-validation.
Currently there is no other way to benefit from the GPU.
Thanks for taking the time to play around with our implementation.
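For anyone landing here: in stock libsvm the cross-validation entry point is svm_cross_validation(), so presumably that is the call that has to be on your path for the GPU to kick in, while a plain svm_train() stays on the CPU. Below is a minimal, hypothetical sketch of driving it through the standard C API; the toy data and parameter values are illustrative only, not code from this repo.

```c
/* Minimal sketch: exercising the cross-validation path through
 * libsvm's standard C API. Toy data and parameter values are
 * illustrative only. Build against this fork's svm.h / svm.cpp. */
#include <stdio.h>
#include <stdlib.h>
#include "svm.h"

int main(void)
{
    /* Four 1-D points in libsvm's sparse format; index -1 terminates. */
    struct svm_node x0[] = {{1, 0.0}, {-1, 0.0}};
    struct svm_node x1[] = {{1, 0.1}, {-1, 0.0}};
    struct svm_node x2[] = {{1, 0.9}, {-1, 0.0}};
    struct svm_node x3[] = {{1, 1.0}, {-1, 0.0}};
    struct svm_node *x[] = {x0, x1, x2, x3};
    double y[] = {-1, -1, +1, +1};

    struct svm_problem prob = {4, y, x};   /* l, y, x */

    struct svm_parameter param = {0};
    param.svm_type    = C_SVC;
    param.kernel_type = RBF;
    param.gamma       = 1.0;      /* typically 1 / num_features */
    param.C           = 1.0;
    param.cache_size  = 100;      /* MB */
    param.eps         = 1e-3;

    const char *err = svm_check_parameter(&prob, &param);
    if (err) { fprintf(stderr, "bad parameters: %s\n", err); return 1; }

    /* Per the answer above, only the cross-validation path is
     * GPU-accelerated in this fork; svm_train() alone is not. */
    double *target = malloc(prob.l * sizeof(double));
    svm_cross_validation(&prob, &param, 2, target);   /* 2-fold CV */

    for (int i = 0; i < prob.l; i++)
        printf("sample %d: true %g, cv-predicted %g\n", i, y[i], target[i]);

    free(target);
    svm_destroy_param(&param);
    return 0;
}
```

If you only need a single trained model, one workaround consistent with the answer above is to run the n-fold cross-validation for parameter selection on the GPU, then fit the final model with svm_train() on the CPU.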
Hi @tpapaz, I have noticed the same thing as @cysin did: the GPU only takes part in cross-validation.
After some poking around, it seems the GPU version is slower even when I do use cross-validation. For example, with a dataset of 200k samples and 16-dimensional features, the GPU implementation (based on libsvm 3.17, ~25 min) is slower than the CPU version (latest libsvm 3.20, ~20 min).
P.S. My hardware is an Intel Xeon E5-2620 and an Nvidia Tesla K10. The GPU reports only 3% usage.
P.P.S. Also, if I use too many samples to train the SVM, say 2 million, the GPU implementation crashes with a SIGSEGV and leaves a HUGE core dump of 2.0 TB. Strange.
Regards~
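For scale, here is a rough back-of-the-envelope on the 2-million-sample crash. This is my own speculation (I have not traced the crash in the code), assuming the implementation materializes, or even just indexes, some O(n²) pairwise kernel structure:

```c
/* Back-of-the-envelope only -- not code from this repo. Shows why
 * n = 2 million samples breaks anything that builds or indexes the
 * full pairwise kernel matrix.                                     */
#include <stdio.h>

int main(void)
{
    long long n = 2000000LL;          /* 2 million training samples   */
    long long entries = n * n;        /* 4e12 pairwise kernel values  */

    printf("dense kernel matrix, float:  ~%.0f TB\n", entries * 4.0 / 1e12);
    printf("dense kernel matrix, double: ~%.0f TB\n", entries * 8.0 / 1e12);

    /* n*n also exceeds INT_MAX (~2.1e9), so any 32-bit index
     * arithmetic over pairwise entries wraps around -- a classic
     * recipe for out-of-bounds writes and exactly this SIGSEGV.    */
    printf("n*n = %lld vs INT_MAX ~ 2.1e9\n", entries);
    return 0;
}
```

A full float kernel matrix at this size is ~16 TB, so if the fork caches kernel rows more aggressively than stock libsvm's cache_size-bounded LRU cache, that would also be consistent with the 2.0 TB core dump.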
Hello @tpapaz,
I am trying to use your implementation of the SVM on GPU with the same dataset, but there is no acceleration.
Please advise.