While MUQ has many built-in ModPieces, using MUQ on a new application typically requires creating one or more new model components. This is accomplished by creating a new class that inherits from ModPiece and overrides the evaluation method (and, optionally, the derivative functions). To demonstrate this process, consider a simple ModPiece that evaluates a scalar linear regression model at several points and returns a single vector of predictions. Let \(y_i\in\mathbb{R}\) denote the model prediction at a point \(x_i\in\mathbb{R}\). The model is given by
\[ y_i = c_1 x_i + c_2, \]
where \(c=[c_1,c_2]\) is a vector of coefficients. The vectors \(x\) and \(c\) are the two inputs to the ModPiece, and the vector \(y\) is its only output. To define this ModPiece, we need to create a new class that inherits from ModPiece, tells the parent ModPiece constructor the sizes of the inputs and outputs, and then overrides the abstract EvaluateImpl function in the ModPiece base class. Below is an example of how this could be accomplished:
#include "MUQ/Modeling/ModPiece.h"

class SimpleModel : public muq::Modeling::ModPiece
{
public:
  // Two inputs (x with numPts entries, c with 2 entries) and one output (y with numPts entries)
  SimpleModel(int numPts) : muq::Modeling::ModPiece({numPts,2},{numPts}) {};

protected:
  void EvaluateImpl(muq::Modeling::ref_vector<Eigen::VectorXd> const& inputs) override
  {
    Eigen::VectorXd const& x = inputs.at(0).get();
    Eigen::VectorXd const& c = inputs.at(1).get();

    // Evaluate y_i = c_1*x_i + c_2 at all points simultaneously
    Eigen::VectorXd y = c(0)*x + c(1)*Eigen::VectorXd::Ones(x.size());

    // Store the result in the "outputs" member variable instead of returning it
    outputs.resize(1);
    outputs.at(0) = y;
  }
};
The ModPiece constructor expects either an Eigen::VectorXi or a std::vector<int> in C++, and a list or numpy array of ints in Python.
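For instance, the brace-initialized sizes in the constructor above could equivalently be passed as explicit Eigen::VectorXi objects. A minimal sketch (the variable names here are illustrative):
// Size vectors equivalent to the {numPts,2} and {numPts} arguments above
Eigen::VectorXi inputSizes(2);
inputSizes << numPts, 2;  // input 0: the points x; input 1: the coefficients c

Eigen::VectorXi outputSizes(1);
outputSizes << numPts;    // output 0: the predictions y

// The constructor could then call muq::Modeling::ModPiece(inputSizes, outputSizes)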
Note that the EvaluateImpl function does not return the result. Instead, it sets the member variable outputs, which is a std::vector of Eigen::VectorXd vectors in C++ and a list of numpy arrays in Python. Setting outputs instead of returning a vector allows MUQ to reduce the number of times data is copied in large computational graphs. It also enables easy one-step caching, which avoids redundant consecutive calls to EvaluateImpl with the same inputs.
Using the SimpleModel ModPiece is identical to using the native ModPieces provided by MUQ:
unsigned int numPts = 10;
Eigen::VectorXd x = Eigen::VectorXd::Random(numPts);
Eigen::VectorXd c(2);
c << 1.0, 0.5;

// Construct the ModPiece and evaluate it; Evaluate returns the vector of outputs
auto mod = std::make_shared<SimpleModel>(numPts);
Eigen::VectorXd y = mod->Evaluate(x,c).at(0);
Notice that the EvaluateImpl function is defined in the SimpleModel class, but we call the Evaluate function when evaluating the model. The Evaluate function is defined in the parent ModPiece class and calls the EvaluateImpl function. The parent Evaluate function also checks the sizes of the inputs, supports one-step caching of the evaluation, and keeps track of average runtimes.
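For example, the one-step cache means that back-to-back evaluations with identical inputs need not re-run EvaluateImpl. A small sketch using the variables above (whether the cache is consulted depends on how the ModPiece is configured):
Eigen::VectorXd y1 = mod->Evaluate(x,c).at(0); // runs EvaluateImpl
Eigen::VectorXd y2 = mod->Evaluate(x,c).at(0); // same inputs: the cached result can be reused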
The EvaluateImpl function is the only thing that must be implemented to define a new ModPiece. All unimplemented derivative functions (e.g., Jacobian, Gradient, etc.) will default to finite difference implementations. However, overriding the finite difference implementation is advantageous when derivatives can be computed analytically. This can be accomplished in MUQ by implementing one or more of the JacobianImpl, ApplyJacobianImpl, GradientImpl, or ApplyHessianImpl functions.
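For the linear model above, these derivatives are available in closed form. Writing the model as \(y = c_1 x + c_2 \mathbf{1}\), where \(\mathbf{1}\) is a vector of ones, the Jacobians are
\[ \frac{\partial y}{\partial x} = c_1 I, \qquad \frac{\partial y}{\partial c} = \begin{bmatrix} x & \mathbf{1} \end{bmatrix}, \]
and, for a sensitivity vector \(s\), the gradients (the transposed Jacobians applied to \(s\)) are
\[ \nabla_x \left(s^T y\right) = c_1 s, \qquad \nabla_c \left(s^T y\right) = \begin{bmatrix} x^T s \\ \mathbf{1}^T s \end{bmatrix}. \]
These are exactly the expressions implemented in the code below.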
Extending the SimpleModel definition above, all of the derivative functions could be implemented as follows:
class SimpleModel : public muq::Modeling::ModPiece
{
public:
  SimpleModel(int numPts) : muq::Modeling::ModPiece({numPts,2},{numPts}) {};

protected:
  void EvaluateImpl(muq::Modeling::ref_vector<Eigen::VectorXd> const& inputs) override
  {
    Eigen::VectorXd const& x = inputs.at(0).get();
    Eigen::VectorXd const& c = inputs.at(1).get();

    Eigen::VectorXd y = c(0)*x + c(1)*Eigen::VectorXd::Ones(x.size());

    outputs.resize(1);
    outputs.at(0) = y;
  }

  virtual void JacobianImpl(unsigned int outWrt,
                            unsigned int inWrt,
                            muq::Modeling::ref_vector<Eigen::VectorXd> const& inputs) override
  {
    Eigen::VectorXd const& x = inputs.at(0).get();
    Eigen::VectorXd const& c = inputs.at(1).get();

    // Jacobian wrt x: dy/dx = c_1 * I
    if(inWrt==0){
      jacobian = c(0)*Eigen::MatrixXd::Identity(x.size(), x.size());

    // Jacobian wrt c: dy/dc = [x, 1]
    }else{
      jacobian = Eigen::MatrixXd::Ones(outputSizes(0), inputSizes(inWrt));
      jacobian.col(0) = x;
    }
  }

  virtual void GradientImpl(unsigned int outWrt,
                            unsigned int inWrt,
                            muq::Modeling::ref_vector<Eigen::VectorXd> const& inputs,
                            Eigen::VectorXd const& sens) override
  {
    Eigen::VectorXd const& x = inputs.at(0).get();
    Eigen::VectorXd const& c = inputs.at(1).get();

    // Gradient wrt x
    if(inWrt==0){
      gradient = c(0) * sens;

    // Gradient wrt c
    }else{
      gradient.resize(2);
      gradient(0) = x.dot(sens);
      gradient(1) = sens.sum();
    }
  }

  virtual void ApplyJacobianImpl(unsigned int outWrt,
                                 unsigned int inWrt,
                                 muq::Modeling::ref_vector<Eigen::VectorXd> const& inputs,
                                 Eigen::VectorXd const& vec) override
  {
    Eigen::VectorXd const& x = inputs.at(0).get();
    Eigen::VectorXd const& c = inputs.at(1).get();

    // Jacobian wrt x
    if(inWrt==0){
      jacobianAction = c(0)*vec;

    // Jacobian wrt c
    }else{
      jacobianAction = vec(0)*x + vec(1)*Eigen::VectorXd::Ones(x.size());
    }
  }

  virtual void ApplyHessianImpl(unsigned int outWrt,
                                unsigned int inWrt1,
                                unsigned int inWrt2,
                                muq::Modeling::ref_vector<Eigen::VectorXd> const& inputs,
                                Eigen::VectorXd const& sens,
                                Eigen::VectorXd const& vec) override
  {
    Eigen::VectorXd const& x = inputs.at(0).get();
    Eigen::VectorXd const& c = inputs.at(1).get();

    // Apply d^2 / dxdc
    if((inWrt1==0)&&(inWrt2==1)){
      hessAction = vec(0) * sens;

    // Apply d^2 / dcdx
    }else if((inWrt2==0)&&(inWrt1==1)){
      hessAction.resize(2);
      hessAction(0) = sens.dot(vec);
      hessAction(1) = 0;

    // Apply d^2 / dxds
    }else if((inWrt1==0)&&(inWrt2==2)){
      hessAction = c(0) * vec;

    // Apply d^2 / dcds
    }else if((inWrt1==1)&&(inWrt2==2)){
      hessAction.resize(2);
      hessAction(0) = x.dot(vec);
      hessAction(1) = vec.sum();

    // All remaining second derivatives (e.g., d^2/dx^2 and d^2/dc^2) are zero
    }else{
      hessAction = Eigen::VectorXd::Zero(inputSizes(inWrt1));
    }
  }
}; // end of class SimpleModel
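As with evaluation, the derivative implementations are invoked through wrappers defined in the parent ModPiece class. A short usage sketch, assuming Jacobian and Gradient wrappers that, like Evaluate, take the output index, input index, and the inputs (plus a sensitivity vector for Gradient):
unsigned int numPts = 10;
Eigen::VectorXd x = Eigen::VectorXd::Random(numPts);
Eigen::VectorXd c(2);
c << 1.0, 0.5;

auto mod = std::make_shared<SimpleModel>(numPts);

// Jacobian of output 0 with respect to input 1 (the coefficients c)
Eigen::MatrixXd jac = mod->Jacobian(0, 1, x, c);

// Gradient of sens^T y with respect to input 0 (the points x)
Eigen::VectorXd sens = Eigen::VectorXd::Ones(numPts);
Eigen::VectorXd grad = mod->Gradient(0, 0, x, c, sens);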