Eigen Support #90
As of now, I find that a combination of XAD and Eigen already works for simple cases:

```cpp
#include <XAD/XAD.hpp>
#include <Eigen/Dense>

#include <iostream>
#include <vector>

template <typename T>
T norm(const std::vector<T> &x) {
    T r = 0.0;
    for (T v : x) {
        r = r + v * v;
    }
    r = sqrt(r);  // found via ADL for XAD active types
    return r;
}

template <typename T>
T normEigen(const std::vector<T> &x) {
    using namespace Eigen;
    typedef Matrix<T, 1, Dynamic> VectorType;
    typedef Map<const VectorType> MapConstVectorType;
    MapConstVectorType xmap(x.data(), x.size());
    T r = xmap.norm();
    return r;
}

int main() {
    // types for first-order adjoints in double precision
    using mode = xad::adj<double>;
    using Adouble = mode::active_type;
    using Tape = mode::tape_type;

    // variables
    std::vector<Adouble> x = {1.0, 1.5, 1.3, 1.2};
    Adouble y;

    // start taping
    Tape tape;
    tape.registerInputs(x);

    // Manual norm
    tape.newRecording();
    y = norm(x);
    tape.registerOutput(y);
    derivative(y) = 1.0;     // seed output adjoint
    tape.computeAdjoints();  // roll back tape
    std::cout << "Using manual: \n"
              << "y = " << value(y) << "\n"
              << "dy/dx0 = " << derivative(x[0]) << "\n"
              << "dy/dx1 = " << derivative(x[1]) << "\n"
              << "dy/dx2 = " << derivative(x[2]) << "\n"
              << "dy/dx3 = " << derivative(x[3]) << "\n\n";

    // Eigen norm
    tape.newRecording();
    y = normEigen(x);
    tape.registerOutput(y);
    derivative(y) = 1.0;     // seed output adjoint
    tape.computeAdjoints();  // roll back tape
    std::cout << "Using Eigen: \n"
              << "y = " << value(y) << "\n"
              << "dy/dx0 = " << derivative(x[0]) << "\n"
              << "dy/dx1 = " << derivative(x[1]) << "\n"
              << "dy/dx2 = " << derivative(x[2]) << "\n"
              << "dy/dx3 = " << derivative(x[3]) << "\n";
}
```
That's interesting, thanks for sharing this. Did you try more complex matrix/vector operations than `norm`?
As you said, more complex operations also work:

```cpp
#include <XAD/XAD.hpp>
#include <Eigen/Dense>

#include <iostream>

int main() {
    // types for first-order adjoints in double precision
    using mode = xad::adj<double>;
    using Adouble = mode::active_type;
    using Tape = mode::tape_type;

    // variables
    typedef Eigen::Matrix<Adouble, Eigen::Dynamic, Eigen::Dynamic> MatrixType;
    MatrixType x(2, 2);
    x(0, 0) = 1.0;
    x(0, 1) = 1.5;
    x(1, 0) = 1.3;
    x(1, 1) = 1.2;
    MatrixType y;

    // start taping (keep the reshaped view alive while registering)
    Tape tape;
    auto xr = x.reshaped();
    tape.registerInputs(xr.begin(), xr.end());

    // Matrix inverse
    tape.newRecording();
    y = x.inverse();
    for (Adouble &yy : y.reshaped()) {
        tape.registerOutput(yy);
        derivative(yy) = 1.0;  // seed output adjoint
    }
    tape.computeAdjoints();  // roll back tape

    for (const Adouble &xx : x.reshaped()) {
        std::cout << "x = " << value(xx) << "\n";
    }
    for (const Adouble &yy : y.reshaped()) {
        std::cout << "y = " << value(yy) << "\n";
    }
    std::cout << "Derivative of y = inv(x): \n";
    for (const Adouble &xx : x.reshaped()) {
        std::cout << "dy/dx = " << derivative(xx) << "\n";
    }
}
```

I was able to do norms, matmuls, and inverse operations without much effort. I have not checked the computational efficiency of computing the derivative of the inverse operation, but a numerical check gave the same results as PyTorch, which is a good sign.
XAD should support Eigen data types and related linear algebra operations, calculating values and derivatives efficiently.
Ideally, simply using an XAD type within Eigen vectors or matrices should work out of the box and be efficient. Given that both Eigen and XAD are using expression templates, it may require some traits and template specialisations to make this work seamlessly - and efficiently.