Question regarding convergence of integer optimization #542
Comments
Hey @yolking, would it be possible for you to post a complete snippet, including the function you're optimizing, so that I can run the code and check for myself?
I assume the problem is fixed. Feel free to re-open if you encounter it again.
The issue isn't fixed; it's just that this function is evaluated on specific dataframes, so I can't easily share everything here publicly. I'll reopen it if I come up with a simple example. For now I've focused on CMA libraries, where this issue doesn't happen, though they have other, smaller issues.
I don't have permission to reopen the issue. I hope you'll see this comment.
Maybe it cannot find the optimum because it's a border solution, but it also keeps searching very far from it even 300 iterations later, e.g. this log line: 0 -45769 : (103, 436, 861)
I tried dropping the integers entirely, and on floats it takes many more iterations while exploring completely random-looking points, so I guess it's not really an integer problem.
Hi @yolking, I had a bit of a look at your problem. Let me start by saying that purely non-continuous optimization is not the intended, and definitely not the ideal, use case for this package.

The problem is essentially the optimization of the acquisition function, which, for non-continuous parameters, is based on random sampling. This means that while a guess close to the optimal solution might mean the optimal solution is rated highly by the acquisition function (though this is by no means guaranteed), if the optimal point is never produced by the random sampling then it will simply never be suggested.

One way to deal with this would be to suggest every possible point at each step, but for your problem that is about 1 billion points per step, which is massive (it might still be feasible depending on your machine). Depending on how much time you want to invest, you could also override the suggest step of the acquisition function to use a different optimization method for finding its maximum. Sorry I couldn't be of more help.
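For a small discrete space, the idea can be sketched outside the package: fit a GP to the observed points, score every integer candidate with Expected Improvement, and suggest the argmax instead of relying on random sampling. This is a minimal illustration, not the package's implementation; the helper below and its hand-rolled EI formula are my own assumptions for a maximization problem:

```python
import numpy as np
from itertools import product
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def suggest_exhaustive(X_obs, y_obs, bounds, xi=0.0):
    """Score every integer point in `bounds` with Expected Improvement
    and return the highest-scoring one. `bounds` is a list of
    (low, high) pairs; only feasible when the full grid is small."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_obs, y_obs)

    # Enumerate the whole integer grid instead of random sampling.
    grid = np.array(list(product(*(range(lo, hi + 1) for lo, hi in bounds))))

    mean, std = gp.predict(grid, return_std=True)
    std = np.maximum(std, 1e-12)  # guard against zero predictive std

    # Expected Improvement over the best target observed so far.
    best = y_obs.max()
    improvement = mean - best - xi
    z = improvement / std
    ei = improvement * norm.cdf(z) + std * norm.pdf(z)
    return grid[np.argmax(ei)]

# Example usage on a deliberately small grid (~7.7k points).
X_obs = np.array([[129, 2, 740], [125, 4, 700]])
y_obs = np.array([0.5739, 0.5100])
print(suggest_exhaustive(X_obs, y_obs, bounds=[(120, 140), (0, 5), (700, 760)]))
```

The same logic could be dropped into a custom suggest step, swapping the hand-rolled EI for the package's own acquisition object.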
Okay, thank you. That answers half of my question. But what about the continuous case?
Running the same code with continuous bounds and looking at the printed guesses, the algorithm seems to have only a very rough idea of the direction of the global optimum. Do you consider this behavior normal?
Bayesian optimization produces this after 700 iterations:
Since it's like this even in the continuous case, I would either consider BO inefficient at converging to a single global optimum once it reaches some close proximity of it, or suspect a bug.
Hello! I am using the pre-release version because I am interested in integer optimization. On the one hand, it gets me close to the global optimum pretty fast.
But on the other, BO seems to make very little effort to improve the optimum it has found and keeps looking far away from the optimal point. I checked the docs for possible ways to make it search closer to the found optimum.
`SequentialDomainReductionTransformer` may have been one of them, but it isn't currently available for integer optimization. Changing the acquisition function to `acquisition.ExpectedImprovement(xi=0.0)` after N unsuccessful iterations seems like another possible solution, but it doesn't appear to do much; a sketch of how such an acquisition function can be passed in follows below.
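For context, wiring a different acquisition function into the pre-release looks roughly like the sketch below. The `acquisition_function` keyword and the `(low, high, int)` bound syntax are my reading of the 2.x pre-release API and may differ in the exact version installed; `black_box` is a hypothetical stand-in objective:

```python
from bayes_opt import BayesianOptimization, acquisition

def black_box(x, y, z):
    # Stand-in for the real objective, which runs on private dataframes.
    return -((x - 128) ** 2 + (y - 2) ** 2 + (z - 711) ** 2)

optimizer = BayesianOptimization(
    f=black_box,
    # Assumed pre-release syntax: a third tuple element marks a
    # parameter as integer-valued.
    pbounds={"x": (0, 1000, int), "y": (0, 1000, int), "z": (0, 1000, int)},
    # xi=0.0 makes Expected Improvement maximally greedy.
    acquisition_function=acquisition.ExpectedImprovement(xi=0.0),
    random_state=1,
)
optimizer.maximize(init_points=10, n_iter=300)
print(optimizer.max)
```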
Here is a chunk of the log of my optimization. The best value found was 0.5739394489614946 at [129, 2, 740], on evaluation 270; for the following 150 iterations no better point was found. The unreached global optimum is 0.5751567341679235 at [128, 2, 711].