fine tune predict API: read model from index directly #1557
Conversation
Signed-off-by: Yaliang Wu <[email protected]>
Happy to see no UT change needed for this PR.
@@ -109,8 +105,9 @@ public RestChannelConsumer prepareRequest(RestRequest request, NodeClient client
            log.error("Failed to send error response", ex);
        }
    });
    client.execute(MLModelGetAction.INSTANCE, getModelRequest, listener);
    try (ThreadContext.StoredContext context = client.threadPool().getThreadContext().stashContext()) {
why do we need stashContext here?
The model index is a system index; stashing the thread context lets the plugin read it directly.
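To illustrate why the diff wraps the index read in `stashContext()`: stashing clears the caller's thread context so a privileged system-index read is not blocked or filtered by the end user's identity, and the `try`-with-resources block restores the original context afterwards. The sketch below is a minimal, self-contained stand-in (the `userContext` ThreadLocal and `stashContext` helper are hypothetical simplifications of OpenSearch's `ThreadContext`, not its real API):

```java
public class StashContextDemo {
    // Hypothetical stand-in for OpenSearch's ThreadContext: holds the caller identity.
    static final ThreadLocal<String> userContext = new ThreadLocal<>();

    // Mimics ThreadContext.stashContext(): clears the current context and
    // returns a handle that restores it when closed (try-with-resources).
    static AutoCloseable stashContext() {
        String saved = userContext.get();
        userContext.remove();                // code inside the stash runs with no user context
        return () -> userContext.set(saved); // restore the caller's context on close
    }

    public static void main(String[] args) throws Exception {
        userContext.set("end-user");
        try (AutoCloseable stashed = stashContext()) {
            // A privileged system-index read would happen here.
            System.out.println("inside stash: " + userContext.get()); // prints "inside stash: null"
        }
        System.out.println("after stash: " + userContext.get()); // prints "after stash: end-user"
    }
}
```

The same shape appears in the diff above: the model lookup runs inside the stashed scope, and the stored context is restored automatically when the `try` block exits.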
Codecov Report
@@ Coverage Diff @@
## main #1557 +/- ##
============================================
+ Coverage 78.39% 78.42% +0.02%
- Complexity 2378 2380 +2
============================================
Files 195 195
Lines 9594 9593 -1
Branches 964 964
============================================
+ Hits 7521 7523 +2
+ Misses 1636 1632 -4
- Partials 437 438 +1
Flags with carried forward coverage won't be shown.
Signed-off-by: Yaliang Wu <[email protected]> (cherry picked from commit 0920ba7)
Signed-off-by: Yaliang Wu <[email protected]> (cherry picked from commit 0920ba7) Co-authored-by: Yaliang Wu <[email protected]>
…ject#1557) Signed-off-by: Yaliang Wu <[email protected]> Signed-off-by: TrungBui59 <[email protected]>
…ject#1557) Signed-off-by: Yaliang Wu <[email protected]>
Description
Currently a user needs both the get-model permission and the predict permission to call the predict API, which is not reasonable. This change reads the model from the index directly, so the predict API no longer depends on the get-model action.
Issues Resolved
[List any issues this PR will resolve]
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.