
Conversation

JonasBreuling

Added a function call to SciPy's numerical derivative for vector-valued constraint functions supplied without a derivative


res = minimize_ipopt(rosen, x0, jac=rosen_der, bounds=bounds, constraints=[eq_cons])

print(res)
Collaborator


It would be ideal to add a unit test that checks this example, or a similar one.

Collaborator


Agreed. See here for how we've written other tests that are scipy-optional. The test can simply be a copy (with the small required changes) of any of the three tests in that module currently marked with @pytest.mark.skipif("scipy" not in sys.modules).
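A scipy-optional test along those lines could be sketched as follows (the test name, expected result, and body are illustrative, not the PR's actual test; the skip condition is the one used by the existing tests):

```python
import sys

import pytest


# Skipped entirely when SciPy has not been imported, mirroring the existing
# scipy-optional tests in the module.
@pytest.mark.skipif("scipy" not in sys.modules, reason="Test requires SciPy")
def test_minimize_ipopt_nojac_constraints():
    import cyipopt
    from scipy.optimize import rosen

    x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
    constr = {"fun": lambda x: rosen(x) - 1.0, "type": "ineq"}
    res = cyipopt.minimize_ipopt(rosen, x0, constraints=constr)
    assert res.get("success") is True
```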


import cyipopt

from scipy.optimize._numdiff import approx_derivative
Collaborator


I worry a little about whether this function should be used, as _numdiff is not a "public" module.

Collaborator


I agree with @moorepants here in principle. That said, using finite differencing to get an accurate derivative approximation is nontrivial: an error analysis is required to choose an optimal step size that balances truncation and subtractive cancellation errors. Tapping into scipy.optimize does seem like the best way to do it without reimplementing it ourselves or introducing an additional dependency.
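To make the trade-off concrete: for a central difference the truncation error shrinks like h² while subtractive cancellation grows like eps/h, so the total error is minimized near h ≈ eps^(1/3). A minimal illustrative reimplementation of a central-difference Jacobian with that step choice (this is a sketch of the idea, not SciPy's actual code):

```python
import numpy as np


def central_jacobian(fun, x, rel_step=None):
    # Central differences: truncation error O(h**2), cancellation error
    # O(eps / h); the combined error is smallest for h on the order of
    # eps**(1/3), scaled by the magnitude of each component of x.
    x = np.asarray(x, dtype=float)
    h = (rel_step or np.finfo(float).eps ** (1 / 3)) * np.maximum(1.0, np.abs(x))
    f0 = np.atleast_1d(fun(x))
    jac = np.empty((f0.size, x.size))
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h[i]
        jac[:, i] = (np.atleast_1d(fun(x + e))
                     - np.atleast_1d(fun(x - e))) / (2 * h[i])
    return jac


# Example: the Jacobian of f(x) = [x0**2, x0*x1] at x = [1.0, 2.0]
# is [[2, 0], [2, 1]].
J = central_jacobian(lambda x: np.array([x[0] ** 2, x[0] * x[1]]), [1.0, 2.0])
```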

Author


There are several problems I encountered. As noted here, scipy decided to use approx_derivative instead of approx_fprime.

This has several disadvantages I want to discuss. As you can see from the failed pipeline, this is not the best function for approximating a numerical derivative. When testing it on my system I had scipy==1.4.1 installed; in this version the change was not incorporated. After upgrading to scipy==1.6.1 the tests fail too.

Led by these observations, I've used the implementation given in optpy. If you are willing to review that, I will make a new pull request.

But we have to be aware that the results of the test cases depend on the actual scipy version when no Jacobian function is given!

Collaborator

@brocksam left a comment


I think this is a great feature and contribution, thanks. My opinion is that if we are going to supply a numerical approximation on the user's behalf, then we need to do a good job of making sure that it is decently accurate. From my perspective that means going beyond just using a naive finite differencing scheme. Just my opinion though, so a joint consensus from all contributors would be good here.


import cyipopt


Collaborator


Leave this blank line here. PEP8 convention is two blank lines between imports and other code.

x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
constr = {"fun": lambda x: rosen(x) - 1.0, "type": "ineq"}
res = cyipopt.minimize_ipopt(rosen, x0, constraints=constr)
print(res)
Collaborator


Shouldn't have print statements within tests.

expected_res = np.array([1.001867, 0.99434067, 1.05070075, 1.17906312,
                         1.38103001])
np.testing.assert_allclose(res.get("x"), expected_res)

Collaborator


Suggested change

Two blank lines between functions as per PEP8 convention.

assert isinstance(res, dict)
assert res.get("status") == 0
assert res.get("success") is True
np.testing.assert_allclose(res.get("x"), expected_res)
Collaborator


Suggested change
np.testing.assert_allclose(res.get("x"), expected_res)
np.testing.assert_allclose(res.get("x"), expected_res)

Blank line at end of file.

x0 = np.array([0.5, 0.75])
bounds = [np.array([0, 1]), np.array([-0.5, 2.0])]
expected_res = 0.25 * np.ones_like(x0)
eq_cons = {'fun' : lambda x: x - expected_res, 'type': 'eq'}
Collaborator


Suggested change
eq_cons = {'fun' : lambda x: x - expected_res, 'type': 'eq'}
eq_cons = {"fun": lambda x: x - expected_res, "type": "eq"}

Reformat to align with rest of module.

cyipopt/utils.py Outdated
Currently contains functions to aid with deprecation within CyIpopt.
Currently contains functions to aid with deprecation within CyIpopt and
comoutation of numerical Jacobians.
Collaborator


Suggested change
comoutation of numerical Jacobians.
computation of numerical Jacobians.

Fix typo.

cyipopt/utils.py Outdated
jac = np.zeros([len(x0), len(np.atleast_1d(results[0]))])
for i in range(len(x0)):
    jac[i] = (results[i + 1] - results[0]) / self.epsilon
return jac.transpose()
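Read in context, the quoted loop assembles a forward-difference Jacobian from the unperturbed evaluation results[0] and one perturbed evaluation per coordinate. A self-contained sketch of the same pattern (a hypothetical standalone helper, not the PR's exact class method):

```python
import numpy as np


def forward_jacobian(fun, x0, epsilon=np.sqrt(np.finfo(float).eps)):
    # Evaluate fun once at x0 and once per perturbed coordinate, then build
    # the Jacobian row by row exactly as in the quoted loop.
    x0 = np.asarray(x0, dtype=float)
    results = [np.atleast_1d(fun(x0))]
    for i in range(len(x0)):
        x = x0.copy()
        x[i] += epsilon
        results.append(np.atleast_1d(fun(x)))
    jac = np.zeros([len(x0), len(results[0])])
    for i in range(len(x0)):
        jac[i] = (results[i + 1] - results[0]) / epsilon
    return jac.transpose()


# Example: the Jacobian of f(x) = [x0 * x1] at x = [2.0, 3.0] is [[3, 2]].
J = forward_jacobian(lambda x: np.array([x[0] * x[1]]), [2.0, 3.0])
```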
Collaborator


Suggested change
return jac.transpose()
return jac.transpose()

Blank line at end of file.

cyipopt/utils.py Outdated
if not key in self.value_cache:
    value = self._func(x, *args, **kwargs)
    if np.any(np.isnan(value)):
        print("Warning! nan function value encountered at {0}".format(x))
Collaborator


Ideally should raise a warning rather than just printing a warning-related message.
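For example, the print could be replaced with the standard warnings machinery, which callers can filter or escalate to errors (an illustrative helper, not the PR's code):

```python
import warnings

import numpy as np


def warn_if_nan(value, x):
    # Emit a RuntimeWarning instead of printing, so users can silence it or
    # turn it into an error via warnings.filterwarnings.
    if np.any(np.isnan(value)):
        warnings.warn("nan function value encountered at {0}".format(x),
                      RuntimeWarning)
    return value
```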

@nrontsis
Contributor

nrontsis commented Dec 9, 2022

Is there an advantage to using scipy's numerical approximation of derivatives, instead of the one included in Ipopt itself, via settings like jacobian_approximation?

@moorepants
Collaborator

It is possible that scipy has some options for numerical derivatives that Ipopt doesn't, which could be exposed. Other than that, we'd just need to make sure the derivative estimate setting for Ipopt gets set (if that is necessary).

Either option is fine in my opinion, as long as it works and is tested.
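For reference, Ipopt's built-in finite differencing is selected through its own documented options; a sketch of passing them through minimize_ipopt (the commented-out call is illustrative and assumes cyipopt is installed with rosen, x0, and constr defined as in the examples above):

```python
# Ipopt's documented derivative-approximation options; passing them through
# minimize_ipopt's options argument delegates the differencing to Ipopt itself.
ipopt_options = {
    "jacobian_approximation": "finite-difference-values",
    "hessian_approximation": "limited-memory",
}
# res = cyipopt.minimize_ipopt(rosen, x0, constraints=[constr],
#                              options=ipopt_options)
```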

4 participants