ATLAS Offline Software
OnnxRuntimeBase Class Reference

#include <OnnxRuntimeBase.h>

Inheritance diagram for OnnxRuntimeBase:
Collaboration diagram for OnnxRuntimeBase:

Public Member Functions

 OnnxRuntimeBase (TString fileName)
 
 OnnxRuntimeBase ()
 
 ~OnnxRuntimeBase ()
 
void initialize (TString)
 
std::vector< float > runONNXInference (std::vector< float > &inputTensorValues) const
 
std::vector< std::vector< float > > runONNXInference (NetworkBatchInput &inputTensorValues) const
 
std::map< int, Eigen::MatrixXf > runONNXInferenceMultilayerOutput (NetworkBatchInput &inputTensorValues) const
 
const std::vector< int64_t > & getInputNodesDims ()
 
const std::vector< int64_t > & getOutputNodesDims ()
 

Public Attributes

TString m_fileName
 

Private Attributes

std::unique_ptr< Ort::Session > m_session
 ONNX runtime session / model properties.
 
std::vector< const char * > m_inputNodeNames
 
std::vector< int64_t > m_inputNodeDims
 
std::vector< const char * > m_outputNodeNames
 
std::vector< int64_t > m_outputNodeDims
 
std::unique_ptr< Ort::Env > m_env
 

Detailed Description

Lightweight wrapper around an ONNX Runtime session: it loads a model from file, caches the input and output node names and shapes, and exposes single-entry, batched, and multi-output inference.

Definition at line 13 of file OnnxRuntimeBase.h.

Constructor & Destructor Documentation

◆ OnnxRuntimeBase() [1/2]

OnnxRuntimeBase::OnnxRuntimeBase ( TString  fileName)

Definition at line 9 of file OnnxRuntimeBase.cxx.

10 {
11  initialize(fileName);
12 }

◆ OnnxRuntimeBase() [2/2]

OnnxRuntimeBase::OnnxRuntimeBase ( )

Definition at line 14 of file OnnxRuntimeBase.cxx.

14 {}

◆ ~OnnxRuntimeBase()

OnnxRuntimeBase::~OnnxRuntimeBase ( )
inline

Definition at line 25 of file OnnxRuntimeBase.h.

25 {}

Member Function Documentation

◆ getInputNodesDims()

const std::vector<int64_t>& OnnxRuntimeBase::getInputNodesDims ( )
inline

Definition at line 32 of file OnnxRuntimeBase.h.

32 {return m_inputNodeDims;};

◆ getOutputNodesDims()

const std::vector<int64_t>& OnnxRuntimeBase::getOutputNodesDims ( )
inline

Definition at line 33 of file OnnxRuntimeBase.h.

33 {return m_outputNodeDims;};

◆ initialize()

void OnnxRuntimeBase::initialize ( TString  fileName)

Definition at line 16 of file OnnxRuntimeBase.cxx.

17 {
18  m_fileName = fileName;
19  // Load the ONNX model into memory using the path stored in m_fileName
20  m_env = std::make_unique< Ort::Env >(ORT_LOGGING_LEVEL_WARNING, "");
21 
22  // Set the ONNX runtime session options
23  Ort::SessionOptions session_options;
24  // Set graph optimization level
25  session_options.SetIntraOpNumThreads(1);
26  session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);
27  // Create the Ort session
28  m_session = std::make_unique< Ort::Session >(*m_env, m_fileName.Data(), session_options);
29  // Default allocator
30  Ort::AllocatorWithDefaultOptions allocator;
31  // Get the names of the input nodes of the model
32  size_t numInputNodes = m_session->GetInputCount();
33  // Iterate over all input nodes and get the name
34  for (size_t i = 0; i < numInputNodes; i++)
35  {
36  auto name = m_session->GetInputNameAllocated(i, allocator);
37  char* input_name = new char[strlen(name.get()) + 1];
38  strcpy(input_name, name.get());
39 
40  m_inputNodeNames.push_back(input_name);
41  // Get the dimensions of the input nodes,
42  // here we assume that all input nodes have the same dimensions
43  Ort::TypeInfo inputTypeInfo = m_session->GetInputTypeInfo(i);
44  auto tensorInfo = inputTypeInfo.GetTensorTypeAndShapeInfo();
45  m_inputNodeDims = tensorInfo.GetShape();
46  }
47  // Get the names of the output nodes
48  size_t numOutputNodes = m_session->GetOutputCount();
49  // Iterate over all output nodes and get the name
50  for (size_t i = 0; i < numOutputNodes; i++)
51  {
52  auto name = m_session->GetOutputNameAllocated(i, allocator);
53  char* output_name = new char[strlen(name.get()) + 1];
54  strcpy(output_name, name.get());
55  m_outputNodeNames.push_back(output_name);
56  // Get the dimensions of the output nodes
57  // here we assume that all output nodes have the same dimensions
58  Ort::TypeInfo outputTypeInfo = m_session->GetOutputTypeInfo(i);
59  auto tensorInfo = outputTypeInfo.GetTensorTypeAndShapeInfo();
60  m_outputNodeDims = tensorInfo.GetShape();
61  }
62 }
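
A minimal usage sketch (the model path below is hypothetical): construct with a model path, which runs initialize(), or default-construct and initialize later. Note that, per the loops above, the cached dimensions are those of the last input/output node visited.

#include <cstdint>
#include <vector>
#include "OnnxRuntimeBase.h"

int main()
{
  // Construct with a model path; this constructor calls initialize() itself
  OnnxRuntimeBase model("path/to/model.onnx");

  // Or default-construct and load the model later
  OnnxRuntimeBase deferred;
  deferred.initialize("path/to/model.onnx");

  // Node shapes discovered during initialization
  const std::vector<int64_t>& inDims  = model.getInputNodesDims();
  const std::vector<int64_t>& outDims = model.getOutputNodesDims();
  (void)inDims; (void)outDims;
  return 0;
}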

◆ runONNXInference() [1/2]

std::vector< std::vector< float > > OnnxRuntimeBase::runONNXInference ( NetworkBatchInput inputTensorValues) const

Definition at line 77 of file OnnxRuntimeBase.cxx.

78 {
79  int batchSize = inputTensorValues.rows();
80  std::vector<int64_t> inputNodeDims = m_inputNodeDims;
81  std::vector<int64_t> outputNodeDims = m_outputNodeDims; //bad. Assumes they all have the same number of nodes.
82 
83  // The first dim node should correspond to the batch size
84  // If it is -1, it is dynamic and should be set to the input size
85  if (inputNodeDims[0] == -1)
86  {
87  inputNodeDims[0] = batchSize;
88  }
89  if (outputNodeDims[0] == -1)
90  {
91  outputNodeDims[0] = batchSize;
92  }
93 
94  if (inputNodeDims[1] * inputNodeDims[2] != inputTensorValues.cols())
95  {
96  throw std::runtime_error("runONNXInference: feature size doesn't match the input size: inputSize required: " + std::to_string(inputNodeDims[1]*inputNodeDims[2]) + " inputSize provided: " + std::to_string(inputTensorValues.cols()));
97  }
98 
99  if (batchSize != 1 && (inputNodeDims[0] != batchSize || outputNodeDims[0] != batchSize))
100  {
101  throw std::runtime_error("runONNXInference: batch size doesn't match the input or output node size");
102  }
103 
104  // Create input tensor object from data values
105  // note: this assumes the model has only 1 input node
106  Ort::MemoryInfo memoryInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
107  Ort::Value inputTensor = Ort::Value::CreateTensor<float>(memoryInfo, inputTensorValues.data(), inputTensorValues.size(), inputNodeDims.data(), inputNodeDims.size());
108  // Double-check that inputTensor is a Tensor
109  if (!inputTensor.IsTensor())
110  {
111  throw std::runtime_error("runONNXInference: conversion of input to Tensor failed. ");
112  }
113  // Score model on input tensors, get back output tensors
114  Ort::RunOptions run_options;
115  std::vector<Ort::Value> outputTensors =
116  m_session->Run(run_options, m_inputNodeNames.data(), &inputTensor,
117  m_inputNodeNames.size(), m_outputNodeNames.data(),
118  m_outputNodeNames.size());
119  // Double-check that outputTensors contains Tensors and that the count matches
120  // that of output nodes
121  if (!outputTensors[0].IsTensor() || (outputTensors.size() != m_outputNodeNames.size())) {
122  throw std::runtime_error("runONNXInference: calculation of output failed. ");
123  }
124  // Get pointer to output tensor float values
125  // note: this assumes the model has only 1 output value
126  float* outputTensor = outputTensors.front().GetTensorMutableData<float>();
127  // Get the output values
128  std::vector<std::vector<float>> outputTensorValues(batchSize, std::vector<float>(outputNodeDims[1], -1));
129  for (int i = 0; i < outputNodeDims[0]; i++)
130  {
131  for (int j = 0; j < ((outputNodeDims.size() > 1) ? outputNodeDims[1] : 1); j++)
132  {
133  outputTensorValues[i][j] = outputTensor[i * outputNodeDims[1] + j];
134  }
135  }
136 
137  return outputTensorValues;
138 }
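
A sketch of batched inference under stated assumptions: NetworkBatchInput is the row-major Eigen float matrix defined in OnnxRuntimeBase.h (one row per batch entry), the model has a single rank-3 input node of shape [-1, d1, d2], and the sizes and helper name scoreBatch are hypothetical. Each row must carry d1*d2 flattened feature values to pass the size check above.

#include <vector>
#include "OnnxRuntimeBase.h"

std::vector<std::vector<float>> scoreBatch(OnnxRuntimeBase& model)
{
  const int batchSize = 4;     // hypothetical batch size
  const int d1 = 2, d2 = 8;    // must satisfy d1 * d2 == matrix columns
  NetworkBatchInput batch(batchSize, d1 * d2);
  batch.setZero();             // fill with real feature values in practice

  // Returns one output vector per batch row
  return model.runONNXInference(batch);
}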

◆ runONNXInference() [2/2]

std::vector< float > OnnxRuntimeBase::runONNXInference ( std::vector< float > &  inputTensorValues) const

Definition at line 65 of file OnnxRuntimeBase.cxx.

66 {
67  NetworkBatchInput vectorInput(1, inputTensorValues.size());
68  for (size_t i = 0; i < inputTensorValues.size(); i++) {
69  vectorInput(0, i) = inputTensorValues[i];
70  }
71  auto vectorOutput = runONNXInference(vectorInput);
72  return vectorOutput[0];
73 }
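
This overload simply wraps one feature vector as a single-row batch and returns the first row of the batched result. A short sketch (the helper name scoreOne is hypothetical); since the parameter is a non-const reference, const input data has to be copied first:

#include <vector>
#include "OnnxRuntimeBase.h"

std::vector<float> scoreOne(OnnxRuntimeBase& model,
                            const std::vector<float>& features)
{
  std::vector<float> input(features);  // the overload takes std::vector<float>&
  return model.runONNXInference(input);
}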

◆ runONNXInferenceMultilayerOutput()

std::map< int, Eigen::MatrixXf > OnnxRuntimeBase::runONNXInferenceMultilayerOutput ( NetworkBatchInput inputTensorValues) const

Definition at line 143 of file OnnxRuntimeBase.cxx.

144 {
145  const int batchSize = inputTensorValues.rows();
146  std::vector<int64_t> inputNodeDims = m_inputNodeDims;
147  std::vector<int64_t> outputNodeDims = m_outputNodeDims;
148 
149  // The first dim node should correspond to the batch size
150  // If it is -1, it is dynamic and should be set to the input size
151  if (inputNodeDims[0] == -1)
152  {
153  inputNodeDims[0] = batchSize;
154  }
155  if (outputNodeDims[0] == -1)
156  {
157  outputNodeDims[0] = batchSize;
158  }
159 
160  if (inputNodeDims[1] != inputTensorValues.cols())
161  {
162  throw std::runtime_error("runONNXInference: feature size doesn't match the input size: inputSize required: " + std::to_string(inputNodeDims[1]) + " inputSize provided: " + std::to_string(inputTensorValues.cols()));
163  }
164 
165  if (batchSize != 1 && (inputNodeDims[0] != batchSize || outputNodeDims[0] != batchSize))
166  {
167  throw std::runtime_error("runONNXInference: batch size doesn't match the input or output node size");
168  }
169  // Create input tensor object from data values
170  // note: this assumes the model has only 1 input node
171  Ort::MemoryInfo memoryInfo = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
172  Ort::Value inputTensor = Ort::Value::CreateTensor<float>(memoryInfo, inputTensorValues.data(), inputTensorValues.size(), inputNodeDims.data(), inputNodeDims.size());
173  // Double-check that inputTensor is a Tensor
174  if (!inputTensor.IsTensor())
175  {
176  throw std::runtime_error("runONNXInference: conversion of input to Tensor failed. ");
177  }
178  // Score model on input tensors, get back output tensors
179  Ort::RunOptions run_options;
180  std::vector<Ort::Value> outputTensors =
181  m_session->Run(run_options, m_inputNodeNames.data(), &inputTensor,
182  m_inputNodeNames.size(), m_outputNodeNames.data(),
183  m_outputNodeNames.size());
184  // Double-check that outputTensors contains Tensors and that the count matches
185  // that of output nodes
186  if (!outputTensors[0].IsTensor() || (outputTensors.size() != m_outputNodeNames.size())) {
187  throw std::runtime_error("runONNXInference: calculation of output failed. ");
188  }
189  // Get pointers to output tensor float values
190  // note: this assumes the model has multiple output layers
191  std::map<int, Eigen::MatrixXf> outputTensorMap;
192  size_t numOutputNodes = m_session->GetOutputCount();
193  for (size_t i = 0; i < numOutputNodes; i++){ // loop over all output nodes
194 
195  // retrieve pointer to the output float tensor
196  float* output = outputTensors.at(i).GetTensorMutableData<float>();
197  Ort::TypeInfo outputTypeInfo = m_session->GetOutputTypeInfo(i);
198  auto outputTensorInfo = outputTypeInfo.GetTensorTypeAndShapeInfo();
199  // Not all outputNodes have the same shape. Get the new shape.
200  // First dimension should be batch size
201  outputNodeDims = outputTensorInfo.GetShape();
202 
203  int nNodes = outputNodeDims.size() > 1 ? outputNodeDims[1] : 1;
204  Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic> batchMatrix(batchSize, nNodes);
205  for (int j = 0; j < batchSize; j++)
206  {
207  Eigen::VectorXf vec(nNodes);
208  for (int k = 0; k<nNodes; k++)
209  {
210  float val = output[j * outputNodeDims[1] + k];
211  vec(k) = val;
212  }
213  batchMatrix.row(j) = vec;
214  } // batch
215  outputTensorMap[i] = batchMatrix;
216  } // output layers
217  return outputTensorMap;
218 }
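
A sketch of consuming the result (the helper name printOutputs is hypothetical): the map is keyed by output-node index in model order, and each value is a (batchSize x nNodes) Eigen matrix whose row j holds that node's outputs for batch entry j.

#include <iostream>
#include <map>
#include "OnnxRuntimeBase.h"

void printOutputs(OnnxRuntimeBase& model, NetworkBatchInput& batch)
{
  std::map<int, Eigen::MatrixXf> outputs =
      model.runONNXInferenceMultilayerOutput(batch);
  for (const auto& [node, mat] : outputs)
  {
    // mat.row(j) holds this output node's values for batch entry j
    std::cout << "output node " << node << ": "
              << mat.rows() << " x " << mat.cols() << std::endl;
  }
}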

Member Data Documentation

◆ m_env

std::unique_ptr< Ort::Env > OnnxRuntimeBase::m_env
private

Definition at line 44 of file OnnxRuntimeBase.h.

◆ m_fileName

TString OnnxRuntimeBase::m_fileName

Definition at line 17 of file OnnxRuntimeBase.h.

◆ m_inputNodeDims

std::vector<int64_t> OnnxRuntimeBase::m_inputNodeDims
private

Definition at line 40 of file OnnxRuntimeBase.h.

◆ m_inputNodeNames

std::vector<const char*> OnnxRuntimeBase::m_inputNodeNames
private

Definition at line 39 of file OnnxRuntimeBase.h.

◆ m_outputNodeDims

std::vector<int64_t> OnnxRuntimeBase::m_outputNodeDims
private

Definition at line 42 of file OnnxRuntimeBase.h.

◆ m_outputNodeNames

std::vector<const char*> OnnxRuntimeBase::m_outputNodeNames
private

Definition at line 41 of file OnnxRuntimeBase.h.

◆ m_session

std::unique_ptr<Ort::Session> OnnxRuntimeBase::m_session
private

ONNX runtime session / model properties.

Definition at line 37 of file OnnxRuntimeBase.h.


The documentation for this class was generated from the following files:

OnnxRuntimeBase.h
OnnxRuntimeBase.cxx