) where the absolute value, y = abs(x), is applied to\nthe tensor elementwise.\n",
"arguments": [
"X"
],
@@ -190,7 +190,7 @@
"expression_string": "onnx_ops.atanh(input)"
},
"onnx::AveragePool": {
- "description": "\n AveragePool consumes an input tensor X and applies average pooling across\n the tensor according to kernel sizes, stride sizes, and pad lengths.\n average pooling consisting of computing the average on all values of a\n subset of the input tensor according to the kernel size and downsampling the\n data into the output tensor Y for further processing. The output spatial shape will be following:\n ```\n output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)\n ```\n or\n ```\n output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)\n ```\n if ceil_mode is enabled\n\n ```\n * pad_shape[i] is sum of pads along axis i\n ```\n\n `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following:\n ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])\n ```\n And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:\n ```\n pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]\n ```\n The output of each pooling window is divided by the number of elements (exclude pad when attribute count_include_pad is zero).\n ",
+ "description": "\n AveragePool consumes an input tensor X and applies average pooling across\n the tensor according to kernel sizes, stride sizes, and pad lengths.\n average pooling consisting of computing the average on all values of a\n subset of the input tensor according to the kernel size and downsampling the\n data into the output tensor Y for further processing. The output spatial shape will be following:\n ```\n output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)\n ```\n or\n ```\n output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)\n ```\n if ceil_mode is enabled\n\n ```\n * pad_shape[i] is sum of pads along axis i\n ```\n\n `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following when ceil_mode is enabled:\n ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])\n ```\nor when ceil_mode is disabled:\n ```\n VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor(input_spatial_shape[i] / strides_spatial_shape[i])\n ```\n\n And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:\n ```\n pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]\n ```\n The output of each pooling window is divided by the number of elements (exclude pad when attribute count_include_pad is zero).\n ",
"arguments": [
"X"
],
@@ -385,14 +385,14 @@
"expression_string": "onnx_ops.dropout(data, ratio, training_mode, seed)"
},
"onnx::DynamicQuantizeLinear": {
- "description": "\nA Function to fuse calculation for Scale, Zero Point and FP32->8Bit convertion of FP32 Input data.\nOutputs Scale, ZeroPoint and Quantized Input for a given FP32 Input.\nScale is calculated as:\n```\ny_scale = (max(x) - min(x))/(qmax - qmin)\n```\n\n* where qmax and qmin are max and min values for quantization range .i.e [0, 255] in case of uint8\n* data range is adjusted to include 0.\n\nZero point is calculated as:\n```\nintermediate_zero_point = qmin - min(x)/y_scale\ny_zero_point = cast(round(saturate(itermediate_zero_point)))\n```\n\n* where qmax and qmin are max and min values for quantization range .i.e [0, 255] in case of uint8\n* for saturation, it saturates to [0, 255] if it's uint8, or [-127, 127] if it's int8. Right now only uint8 is supported.\n* rounding to nearest ties to even.\n\nData quantization formula is:\n```\ny = saturate (round (x / y_scale) + y_zero_point)\n```\n\n* for saturation, it saturates to [0, 255] if it's uint8, or [-127, 127] if it's int8. Right now only uint8 is supported.\n* rounding to nearest ties to even.\n",
+ "description": "\nA Function to fuse calculation for Scale, Zero Point and FP32->8Bit conversion of FP32 Input data.\nOutputs Scale, ZeroPoint and Quantized Input for a given FP32 Input.\nScale is calculated as:\n```\ny_scale = (maximum(0, max(x)) - minimum(0, min(x))) / (qmax - qmin)\n```\n\n* where qmax and qmin are max and min values for quantization range i.e. [0, 255] in case of uint8\n* data range is adjusted to include 0.\n\nZero point is calculated as:\n```\nintermediate_zero_point = qmin - min(x)/y_scale\ny_zero_point = cast(round(saturate(itermediate_zero_point)))\n```\n\n* where qmax and qmin are max and min values for quantization range .i.e [0, 255] in case of uint8\n* for saturation, it saturates to [0, 255] if it's uint8, or [-127, 127] if it's int8. Right now only uint8 is supported.\n* rounding to nearest ties to even.\n\nData quantization formula is:\n```\ny = saturate (round (x / y_scale) + y_zero_point)\n```\n\n* for saturation, it saturates to [0, 255] if it's uint8, or [-127, 127] if it's int8. Right now only uint8 is supported.\n* rounding to nearest ties to even.\n",
"arguments": [
"x"
],
"expression_string": "onnx_ops.dynamicquantizelinear(x)"
},
"onnx::Einsum": {
- "description": "\nAn einsum of the form `term1, term2 -> output-term` produces an output tensor using the following equation\n\n```\noutput[output-term] = reduce-sum( input1[term1] * input2[term] )\n```\n\nwhere the reduce-sum performs a summation over all the indices occurring in the input terms (term1, term2)\nthat do not occur in the output-term.\n\nThe Einsum operator evaluates algebraic tensor operations on a sequence of tensors, using the Einstein summation\nconvention. The equation string contains a comma-separated sequence of lower case letters. Each term corresponds to\nan operand tensor, and the characters within the terms correspond to operands dimensions.\n\nThis sequence may be followed by \"->\" to separate the left and right hand side of the equation.\nIf the equation contains \"->\" followed by the right-hand side, the explicit (not classical) form of the Einstein\nsummation is performed, and the right-hand side indices indicate output tensor dimensions. In other cases,\noutput indices are (implicitly) set to the alphabetically sorted sequence of indices appearing exactly once in the\nequation.\n\nWhen a dimension character is repeated in the left-hand side, it represents summation along the dimension.\n\nThe equation may contain ellipsis (\"...\") to enable broadcasting. Ellipsis must indicate a fixed number of dimensions.\nSpecifically, every occurrence of ellipsis in the equation must represent the same number of dimensions.\nThe right-hand side may contain exactly one ellipsis. In implicit mode, the ellipsis dimensions are set to the\nbeginning of the output. The equation string may contain space (U+0020) character.\n",
+ "description": "\nAn einsum of the form `term1, term2 -> output-term` produces an output tensor using the following equation\n\n```\noutput[output-term] = reduce-sum( input1[term1] * input2[term2] )\n```\n\nwhere the reduce-sum performs a summation over all the indices occurring in the input terms (term1, term2)\nthat do not occur in the output-term.\n\nThe Einsum operator evaluates algebraic tensor operations on a sequence of tensors, using the Einstein summation\nconvention. The equation string contains a comma-separated sequence of lower case letters. Each term corresponds to\nan operand tensor, and the characters within the terms correspond to operands dimensions.\n\nThis sequence may be followed by \"->\" to separate the left and right hand side of the equation.\nIf the equation contains \"->\" followed by the right-hand side, the explicit (not classical) form of the Einstein\nsummation is performed, and the right-hand side indices indicate output tensor dimensions. In other cases,\noutput indices are (implicitly) set to the alphabetically sorted sequence of indices appearing exactly once in the\nequation.\n\nWhen a dimension character is repeated in the left-hand side, it represents summation along the dimension.\n\nThe equation may contain ellipsis (\"...\") to enable broadcasting. Ellipsis must indicate a fixed number of dimensions.\nSpecifically, every occurrence of ellipsis in the equation must represent the same number of dimensions.\nThe right-hand side may contain exactly one ellipsis. In implicit mode, the ellipsis dimensions are set to the\nbeginning of the output. The equation string may contain space (U+0020) character.\n",
"arguments": [
"Inputs"
],
@@ -485,7 +485,7 @@
"expression_string": "onnx_ops.gatherelements(data, indices, axis)"
},
"onnx::GatherND": {
- "description": "\nGiven `data` tensor of rank `r` >= 1, `indices` tensor of rank `q` >= 1, and `batch_dims` integer `b`, this operator gathers\nslices of `data` into an output tensor of rank `q + r - indices_shape[-1] - 1 - b`.\n\n`indices` is an q-dimensional integer tensor, best thought of as a `(q-1)`-dimensional tensor of index-tuples into `data`,\nwhere each element defines a slice of `data`\n\n`batch_dims` (denoted as `b`) is an integer indicating the number of batch dimensions, i.e the leading `b` number of dimensions of\n`data` tensor and `indices` are representing the batches, and the gather starts from the `b+1` dimension.\n\nSome salient points about the inputs' rank and shape:\n\n1) r >= 1 and q >= 1 are to be honored. There is no dependency condition to be met between ranks `r` and `q`\n\n2) The first `b` dimensions of the shape of `indices` tensor and `data` tensor must be equal.\n\n3) b < min(q, r) is to be honored.\n\n4) The `indices_shape[-1]` should have a value between 1 (inclusive) and rank `r-b` (inclusive)\n\n5) All values in `indices` are expected to be within bounds [-s, s-1] along axis of size `s` (i.e.) `-data_shape[i] <= indices[...,i] <= data_shape[i] - 1`.\n It is an error if any of the index values are out of bounds.\n\nThe output is computed as follows:\n\nThe output tensor is obtained by mapping each index-tuple in the `indices` tensor to the corresponding slice of the input `data`.\n\n1) If `indices_shape[-1] > r-b` => error condition\n\n2) If `indices_shape[-1] == r-b`, since the rank of `indices` is `q`, `indices` can be thought of as `N` `(q-b-1)`-dimensional tensors\n containing 1-D tensors of dimension `r-b`, where `N` is an integer equals to the product of 1 and all the elements in the batch dimensions\n of the indices_shape. Let us think of each such `r-b` ranked tensor as `indices_slice`. Each *scalar value* corresponding to `data[0:b-1,indices_slice]`\n is filled into the corresponding location of the `(q-b-1)`-dimensional tensor to form the `output` tensor (Example 1 below)\n\n3) If `indices_shape[-1] < r-b`, since the rank of `indices` is `q`, `indices` can be thought of as `N` `(q-b-1)`-dimensional tensor\n containing 1-D tensors of dimension `< r-b`. Let us think of each such tensors as `indices_slice`. Each *tensor slice* corresponding\n to `data[0:b-1, indices_slice , :]` is filled into the corresponding location of the `(q-b-1)`-dimensional tensor\n to form the `output` tensor (Examples 2, 3, 4 and 5 below)\n\nThis operator is the inverse of `ScatterND`.\n\n`Example 1`\n\n batch_dims = 0\n\n data = [[0,1],[2,3]] # data_shape = [2, 2]\n\n indices = [[0,0],[1,1]] # indices_shape = [2, 2]\n\n output = [0,3] # output_shape = [2]\n\n`Example 2`\n\n batch_dims = 0\n\n data = [[0,1],[2,3]] # data_shape = [2, 2]\n\n indices = [[1],[0]] # indices_shape = [2, 1]\n\n output = [[2,3],[0,1]] # output_shape = [2, 2]\n\n`Example 3`\n\n batch_dims = 0\n\n data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]\n\n indices = [[0,1],[1,0]] # indices_shape = [2, 2]\n\n output = [[2,3],[4,5]] # output_shape = [2, 2]\n\n`Example 4`\n\n batch_dims = 0\n\n data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]\n\n indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]\n\n output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]\n\n`Example 5`\n\n batch_dims = 1\n\n data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]\n\n indices = [[1],[0]] # indices_shape = [2, 1]\n\n output = [[2,3],[4,5]] # output_shape = [2, 2]\n\n\n",
+ "description": "\nGiven `data` tensor of rank `r` >= 1, `indices` tensor of rank `q` >= 1, and `batch_dims` integer `b`, this operator gathers\nslices of `data` into an output tensor of rank `q + r - indices_shape[-1] - 1 - b`.\n\n`indices` is an q-dimensional integer tensor, best thought of as a `(q-1)`-dimensional tensor of index-tuples into `data`,\nwhere each element defines a slice of `data`\n\n`batch_dims` (denoted as `b`) is an integer indicating the number of batch dimensions, i.e the leading `b` number of dimensions of\n`data` tensor and `indices` are representing the batches, and the gather starts from the `b+1` dimension.\n\nSome salient points about the inputs' rank and shape:\n\n1) r >= 1 and q >= 1 are to be honored. There is no dependency condition to be met between ranks `r` and `q`\n\n2) The first `b` dimensions of the shape of `indices` tensor and `data` tensor must be equal.\n\n3) b < min(q, r) is to be honored.\n\n4) The `indices_shape[-1]` should have a value between 1 (inclusive) and rank `r-b` (inclusive)\n\n5) All values in `indices` are expected to be within bounds [-s, s-1] along axis of size `s` (i.e.) `-data_shape[i] <= indices[...,i] <= data_shape[i] - 1`.\n It is an error if any of the index values are out of bounds.\n\nThe output is computed as follows:\n\nThe output tensor is obtained by mapping each index-tuple in the `indices` tensor to the corresponding slice of the input `data`.\n\n1) If `indices_shape[-1] > r-b` => error condition\n\n2) If `indices_shape[-1] == r-b`, since the rank of `indices` is `q`, `indices` can be thought of as `N` `(q-b-1)`-dimensional tensors\n containing 1-D tensors of dimension `r-b`, where `N` is an integer equals to the product of 1 and all the elements in the batch dimensions\n of the indices_shape. Let us think of each such `r-b` ranked tensor as `indices_slice`. Each *scalar value* corresponding to `data[0:b-1,indices_slice]`\n is filled into the corresponding location of the `(q-b-1)`-dimensional tensor to form the `output` tensor (Example 1 below)\n\n3) If `indices_shape[-1] < r-b`, since the rank of `indices` is `q`, `indices` can be thought of as `N` `(q-b-1)`-dimensional tensor\n containing 1-D tensors of dimension `< r-b`. Let us think of each such tensors as `indices_slice`. Each *tensor slice* corresponding\n to `data[0:b-1, indices_slice , :]` is filled into the corresponding location of the `(q-b-1)`-dimensional tensor\n to form the `output` tensor (Examples 2, 3, 4 and 5 below)\n\nThis operator is the inverse of `ScatterND`.\n\n**Example 1**\n\n```\nbatch_dims = 0\ndata = [[0,1],[2,3]] # data_shape = [2, 2]\nindices = [[0,0],[1,1]] # indices_shape = [2, 2]\noutput = [0,3] # output_shape = [2]\n```\n\n**Example 2**\n\n```\nbatch_dims = 0\ndata = [[0,1],[2,3]] # data_shape = [2, 2]\nindices = [[1],[0]] # indices_shape = [2, 1]\noutput = [[2,3],[0,1]] # output_shape = [2, 2]\n```\n\n**Example 3**\n\n```\nbatch_dims = 0\ndata = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]\nindices = [[0,1],[1,0]] # indices_shape = [2, 2]\noutput = [[2,3],[4,5]] # output_shape = [2, 2]\n```\n\n**Example 4**\n\n```\nbatch_dims = 0\ndata = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]\nindices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]\noutput = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]\n```\n\n**Example 5**\n\n```\nbatch_dims = 1\ndata = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]\nindices = [[1],[0]] # indices_shape = [2, 1]\noutput = [[2,3],[4,5]] # output_shape = [2, 2]\n```\n",
"arguments": [
"data",
"indices"
@@ -687,7 +687,7 @@
"expression_string": "onnx_ops.max(data_0)"
},
"onnx::MaxPool": {
- "description": "\n MaxPool consumes an input tensor X and applies max pooling across\n the tensor according to kernel sizes, stride sizes, and pad lengths.\n max pooling consisting of computing the max on all values of a\n subset of the input tensor according to the kernel size and downsampling the\n data into the output tensor Y for further processing. The output spatial shape will be following:\n ```\n output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)\n ```\n or\n ```\n output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)\n ```\n if ceil_mode is enabled `pad_shape[i]` is the sum of pads along axis `i`.\n\n `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following:\n ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])\n ```\n And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:\n ```\n pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]\n ```\n The output of each pooling window is maximum number of elements exclude pad. \n ",
+ "description": "\n MaxPool consumes an input tensor X and applies max pooling across\n the tensor according to kernel sizes, stride sizes, and pad lengths.\n max pooling consisting of computing the max on all values of a\n subset of the input tensor according to the kernel size and downsampling the\n data into the output tensor Y for further processing. The output spatial shape is calculated differently\n depending on whether explicit padding is used, where pads is employed, or auto padding is used, where auto_pad is utilized.\n With explicit padding (https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html?highlight=maxpool#torch.nn.MaxPool2d):\n ```\n output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)\n ```\n or\n ```\n output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)\n ```\n if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`.\n\n `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following when ceil_mode is enabled:\n ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])\n ```\n or when ceil_mode is disabled (https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D):\n ```\n VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]) + 1\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor((input_spatial_shape[i] - 1) / strides_spatial_shape[i]) + 1\n ```\n And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:\n ```\n pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]\n ```\n The output of each pooling window is maximum number of elements exclude pad. \n ",
"arguments": [
"X"
],
@@ -702,7 +702,7 @@
"expression_string": "onnx_ops.maxroipool(X, rois, pooled_shape, spatial_scale)"
},
"onnx::MaxUnpool": {
- "description": "\nMaxUnpool essentially computes the partial inverse of the MaxPool op.\n The input information to this op is typically the output information from a MaxPool op. The first\n input tensor X is the tensor that needs to be unpooled, which is typically the pooled tensor (first output)\n from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corrsponding\n to the elements in the first input tensor X. Input tensor I is typically the second output of the MaxPool op.\n The third (optional) input is a tensor that specifies the output size of the unpooling operation.\n\nMaxUnpool is intended to do 'partial' inverse of the MaxPool op. 'Partial' because all the non-maximal\n values from the original input to MaxPool are set to zero in the output of the MaxUnpool op. Pooling\n the result of an unpooling operation should give back the original input to the unpooling op.\n\nMaxUnpool can produce the same output size for several input sizes, which makes unpooling op ambiguous.\n The third input argument, output_size, is meant to disambiguate the op and produce output tensor of\n known/predictable size.\n\nIn addition to the inputs, MaxUnpool takes three attributes, namely kernel_shape, strides, and pads,\n which define the exact unpooling op. The attributes typically have the same values as the corrsponding\n pooling op that the unpooling op is trying to invert.\n",
+ "description": "\nMaxUnpool essentially computes the partial inverse of the MaxPool op.\n The input information to this op is typically the output information from a MaxPool op. The first\n input tensor X is the tensor that needs to be unpooled, which is typically the pooled tensor (first output)\n from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corresponding\n to the elements in the first input tensor X. Input tensor I is typically the second output of the MaxPool op.\n The third (optional) input is a tensor that specifies the output size of the unpooling operation.\n\nMaxUnpool is intended to do 'partial' inverse of the MaxPool op. 'Partial' because all the non-maximal\n values from the original input to MaxPool are set to zero in the output of the MaxUnpool op. Pooling\n the result of an unpooling operation should give back the original input to the unpooling op.\n\nMaxUnpool can produce the same output size for several input sizes, which makes unpooling op ambiguous.\n The third input argument, output_size, is meant to disambiguate the op and produce output tensor of\n known/predictable size.\n\nIn addition to the inputs, MaxUnpool takes three attributes, namely kernel_shape, strides, and pads,\n which define the exact unpooling op. The attributes typically have the same values as the corresponding\n pooling op that the unpooling op is trying to invert.\n",
"arguments": [
"X",
"I",
@@ -951,63 +951,63 @@
"expression_string": "onnx_ops.reciprocal(X)"
},
"onnx::ReduceL1": {
- "description": "\nComputes the L1 norm of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the L1 norm of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields 0.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data"
],
"expression_string": "onnx_ops.reducel1(data, axes, keepdims)"
},
"onnx::ReduceL2": {
- "description": "\nComputes the L2 norm of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the L2 norm of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields 0.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data"
],
"expression_string": "onnx_ops.reducel2(data, axes, keepdims)"
},
"onnx::ReduceLogSum": {
- "description": "\nComputes the log sum of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the log sum of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or undefined otherwise.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data"
],
"expression_string": "onnx_ops.reducelogsum(data, axes, keepdims)"
},
"onnx::ReduceLogSumExp": {
- "description": "\nComputes the log sum exponent of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the log sum exponent of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or undefined otherwise.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data"
],
"expression_string": "onnx_ops.reducelogsumexp(data, axes, keepdims)"
},
"onnx::ReduceMax": {
- "description": "\nComputes the max of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the max of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or the minimum value of the data type otherwise.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data"
],
"expression_string": "onnx_ops.reducemax(data, axes, keepdims)"
},
"onnx::ReduceMean": {
- "description": "\nComputes the mean of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the mean of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields undefined.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data"
],
"expression_string": "onnx_ops.reducemean(data, axes, keepdims)"
},
"onnx::ReduceMin": {
- "description": "\nComputes the min of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the min of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields plus infinity (if supported by the datatype) or the maximum value of the data type otherwise.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data"
],
"expression_string": "onnx_ops.reducemin(data, axes, keepdims)"
},
"onnx::ReduceProd": {
- "description": "\nComputes the product of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the product of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields 1.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data"
],
"expression_string": "onnx_ops.reduceprod(data, axes, keepdims)"
},
"onnx::ReduceSum": {
- "description": "\nComputes the sum of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the sum of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields 0.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data",
"axes"
@@ -1015,7 +1015,7 @@
"expression_string": "onnx_ops.reducesum(data, axes, keepdims, noop_with_empty_axes)"
},
"onnx::ReduceSumSquare": {
- "description": "\nComputes the sum square of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid.\n\nThe above behavior is similar to numpy, with the exception that numpy defaults keepdims to\nFalse instead of True.",
+ "description": "\nComputes the sum square of the input tensor's elements along the provided axes. The resulting\ntensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then\nthe resulting tensor has the reduced dimension pruned. Input tensors of rank zero are\nvalid. Reduction over an empty set of values yields 0.\n\n\nThe above behavior is similar to numpy, with the exception that numpy defaults `keepdims`\nto `False` instead of `True`.",
"arguments": [
"data"
],
@@ -1064,7 +1064,7 @@
"expression_string": "onnx_ops.roialign(X, rois, batch_indices, mode, output_height, output_width, sampling_ratio, spatial_scale)"
},
"onnx::Round": {
- "description": "\nRound takes one input Tensor and rounds the values, element-wise, meaning\nit finds the nearest integer for each value.\nIn case of halfs, the rule is to round them to the nearest even integer.\nIf input x is integral, +0, -0, NaN, or infinite, x itself is returned.\nThe output tensor has the same shape and type as the input.\n\nExamples:\n```\nround([0.9]) = [1.0]\nround([2.5]) = [2.0]\nround([2.3]) = [2.0]\nround([1.5]) = [2.0]\nround([-4.5]) = [-4.0]\n```\n",
+ "description": "\nRound takes one input Tensor and rounds the values, element-wise, meaning\nit finds the nearest integer for each value.\nIn case of halves, the rule is to round them to the nearest even integer.\nIf input x is integral, +0, -0, NaN, or infinite, x itself is returned.\nThe output tensor has the same shape and type as the input.\n\nExamples:\n```\nround([0.9]) = [1.0]\nround([2.5]) = [2.0]\nround([2.3]) = [2.0]\nround([1.5]) = [2.0]\nround([-4.5]) = [-4.0]\n```\n",
"arguments": [
"X"
],
@@ -1198,7 +1198,7 @@
"expression_string": "onnx_ops.size(data)"
},
"onnx::Slice": {
- "description": "\nProduces a slice of the input tensor along multiple axes. Similar to numpy:\nhttps://numpy.org/doc/stable/user/basics.indexing.html?highlight=slice#slicing-and-striding\n\nSlice uses the `starts`, `ends`, `axes` and `steps` inputs to select a sub-tensor\nof its input `data` tensor.\n\nAn effective `start[i]`, `end[i]`, and `step[i]` must be computed for each `i`\nin `[0, ... r-1]` where `r = rank(input)` as follows:\n\nIf `axes` are omitted, they are set to `[0, ..., r-1]`.\nIf `steps` are omitted, they are set to `[1, ..., 1]` of length `len(starts)`\n\nThe effective values are initialized as `start[i] = 0`, `end[i] = dims[i]` where\n`dims` are the dimensions of `input` and `step[i] = `1.\n\nAll negative elements of `axes` are made non-negatve by adding `r` to them, where\n`r =rank(input)`.\n\nAll negative values in `starts[i]` and `ends[i]` have `dims[axes[i]]` added to them,\nwhere `dims` are the dimensions of `input`. Then `start[axes[i]]` is the adjusted\n`starts[i]` is clamped into the range `[0, dims[axes[i]]]` for positive stepping\nand `[0, dims[axes[i]]-1]` for negative stepping.\n\nThe clamping for the adjusted `ends[i]` depends on the sign of `steps[i]` and must\naccommodate copying 0 through `dims[axes[i]]` elements, so for positive stepping\n`end[axes[i]]` is clamped to `[0, dims[axes[i]]]`, while for negative stepping it\nis clamped to `[-1, dims[axes[i]]-1]`.\n\nFinally, `step[axes[i]] = steps[i]`.\n\nFor slicing to the end of a dimension with unknown size, it is recommended to pass\nin `INT_MAX` when slicing forward and 'INT_MIN' when slicing backward.\n\nExample 1:\n\n```\ndata = [\n [1, 2, 3, 4],\n [5, 6, 7, 8],\n]\naxes = [0, 1]\nstarts = [1, 0]\nends = [2, 3]\nsteps = [1, 2]\nresult = [\n [5, 7],\n]\n```\n\nExample 2:\n\n```\ndata = [\n [1, 2, 3, 4],\n [5, 6, 7, 8],\n]\nstarts = [0, 1]\nends = [-1, 1000]\nresult = [\n [2, 3, 4],\n]\n```\n",
+ "description": "\nProduces a slice of the input tensor along multiple axes. Similar to numpy:\nhttps://numpy.org/doc/stable/user/basics.indexing.html?highlight=slice#slicing-and-striding\n\nSlice uses the `starts`, `ends`, `axes` and `steps` inputs to select a sub-tensor\nof its input `data` tensor.\n\nAn effective `starts[i]`, `ends[i]`, and `steps[i]` must be computed for each `i`\nin `[0, ... r-1]` where `r = rank(input)` as follows:\n\nIf `axes` are omitted, they are set to `[0, ..., r-1]`.\nIf `steps` are omitted, they are set to `[1, ..., 1]` of length `len(starts)`\n\nThe effective values are initialized as `start[i] = 0`, `ends[i] = dims[i]` where\n`dims` are the dimensions of `input` and `steps[i] = 1`.\n\nAll negative elements of `axes` are made non-negative by adding `r` to them, where\n`r =rank(input)`.\n\nAll negative values in `starts[i]` and `ends[i]` have `dims[axes[i]]` added to them,\nwhere `dims` are the dimensions of `input`. Then `start[axes[i]]` is the adjusted\n`starts[i]` is clamped into the range `[0, dims[axes[i]]]` for positive stepping\nand `[0, dims[axes[i]]-1]` for negative stepping.\n\nThe clamping for the adjusted `ends[i]` depends on the sign of `steps[i]` and must\naccommodate copying 0 through `dims[axes[i]]` elements, so for positive stepping\n`ends[axes[i]]` is clamped to `[0, dims[axes[i]]]`, while for negative stepping it\nis clamped to `[-1, dims[axes[i]]-1]`.\n\nFinally, `steps[axes[i]] = steps[i]`.\n\nFor slicing to the end of a dimension with unknown size, it is recommended to pass\nin `INT_MAX` when slicing forward and 'INT_MIN' when slicing backward.\n\nExample 1:\n\n```\ndata = [\n [1, 2, 3, 4],\n [5, 6, 7, 8],\n]\naxes = [0, 1]\nstarts = [1, 0]\nends = [2, 3]\nsteps = [1, 2]\nresult = [\n [5, 7],\n]\n```\n\nExample 2:\n\n```\ndata = [\n [1, 2, 3, 4],\n [5, 6, 7, 8],\n]\nstarts = [0, 1]\nends = [-1, 1000]\nresult = [\n [2, 3, 4],\n]\n```\n",
"arguments": [
"data",
"starts",
@@ -1216,7 +1216,7 @@
"expression_string": "onnx_ops.softmax(input, axis)"
},
"onnx::SoftmaxCrossEntropyLoss": {
- "description": "Loss function that measures the softmax cross entropy\nbetween 'scores' and 'labels'.\nThis operator first computes a loss tensor whose shape is identical to the labels input.\nIf the input is 2-D with shape (N, C), the loss tensor may be a N-element vector L = (l_1, l_2, ..., l_N).\nIf the input is N-D tensor with shape (N, C, D1, D2, ..., Dk),\nthe loss tensor L may have (N, D1, D2, ..., Dk) as its shape and L[i,][j_1][j_2]...[j_k] denotes a scalar element in L.\nAfter L is available, this operator can optionally do a reduction operator.\n\n* shape(scores): (N, C) where C is the number of classes, or (N, C, D1, D2,..., Dk),\n with K >= 1 in case of K-dimensional loss.\n* shape(labels): (N) where each value is 0 <= labels[i] <= C-1, or (N, D1, D2,..., Dk),\n with K >= 1 in case of K-dimensional loss.\n\nThe loss for one sample, l_i, can caculated as follows:\n```\nl[i][d1][d2]...[dk] = -y[i][c][d1][d2]..[dk], where i is the index of classes.\n```\nor\n```\nl[i][d1][d2]...[dk] = -y[i][c][d1][d2]..[dk] * weights[c], if 'weights' is provided.\n```\n\nloss is zero for the case when label-value equals ignore_index.\n```\nl[i][d1][d2]...[dk] = 0, when labels[n][d1][d2]...[dk] = ignore_index\n```\n\nwhere:\n```\np = Softmax(scores)\ny = Log(p)\nc = labels[i][d1][d2]...[dk]\n```\n\nFinally, L is optionally reduced:\n\n* If reduction = 'none', the output is L with shape (N, D1, D2, ..., Dk).\n* If reduction = 'sum', the output is scalar: Sum(L).\n* If reduction = 'mean', the output is scalar: ReduceMean(L), or if weight is provided: `ReduceSum(L) / ReduceSum(W)`,\n where tensor W is of shape `(N, D1, D2, ..., Dk)` and `W[n][d1][d2]...[dk] = weights[labels[i][d1][d2]...[dk]]`.\n",
+ "description": "Loss function that measures the softmax cross entropy\nbetween 'scores' and 'labels'.\nThis operator first computes a loss tensor whose shape is identical to the labels input.\nIf the input is 2-D with shape (N, C), the loss tensor may be a N-element vector L = (l_1, l_2, ..., l_N).\nIf the input is N-D tensor with shape (N, C, D1, D2, ..., Dk),\nthe loss tensor L may have (N, D1, D2, ..., Dk) as its shape and L[i,][j_1][j_2]...[j_k] denotes a scalar element in L.\nAfter L is available, this operator can optionally do a reduction operator.\n\n* shape(scores): (N, C) where C is the number of classes, or (N, C, D1, D2,..., Dk),\n with K >= 1 in case of K-dimensional loss.\n* shape(labels): (N) where each value is 0 <= labels[i] <= C-1, or (N, D1, D2,..., Dk),\n with K >= 1 in case of K-dimensional loss.\n\nThe loss for one sample, l_i, can calculated as follows:\n```\nl[i][d1][d2]...[dk] = -y[i][c][d1][d2]..[dk], where i is the index of classes.\n```\nor\n```\nl[i][d1][d2]...[dk] = -y[i][c][d1][d2]..[dk] * weights[c], if 'weights' is provided.\n```\n\nloss is zero for the case when label-value equals ignore_index.\n```\nl[i][d1][d2]...[dk] = 0, when labels[n][d1][d2]...[dk] = ignore_index\n```\n\nwhere:\n```\np = Softmax(scores)\ny = Log(p)\nc = labels[i][d1][d2]...[dk]\n```\n\nFinally, L is optionally reduced:\n\n* If reduction = 'none', the output is L with shape (N, D1, D2, ..., Dk).\n* If reduction = 'sum', the output is scalar: Sum(L).\n* If reduction = 'mean', the output is scalar: ReduceMean(L), or if weight is provided: `ReduceSum(L) / ReduceSum(W)`,\n where tensor W is of shape `(N, D1, D2, ..., Dk)` and `W[n][d1][d2]...[dk] = weights[labels[i][d1][d2]...[dk]]`.\n",
"arguments": [
"scores",
"labels",
@@ -1358,7 +1358,7 @@
"expression_string": "onnx_ops.trilu(input, k, upper)"
},
"onnx::Unique": {
- "description": "\nFind the unique elements of a tensor. When an optional attribute 'axis' is provided, unique subtensors sliced along the 'axis' are returned.\nOtherwise the input tensor is flattened and unique values of the flattened tensor are returned.\n\nThis operator returns the unique values or sliced unique subtensors of the input tensor and three optional outputs.\nThe first output tensor 'Y' contains all unique values or subtensors of the input.\nThe second optional output tensor 'indices' contains indices of 'Y' elements' first occurance in 'X'..\nThe third optional output tensor 'inverse_indices' contains, for elements of 'X', its corresponding indices in 'Y'. \".\nThe fourth optional output tensor 'counts' contains the count of each element of 'Y' in the input.\n\nOutputs are either sorted in ascending order or optionally in the order of the first occurrence of the values in the input.\n\nhttps://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html\n\nExample 1:\n```\ninput_X = [2, 1, 1, 3, 4, 3]\nattribute_sorted = 0\nattribute_axis = None\noutput_Y = [2, 1, 3, 4]\noutput_indices = [0, 1, 3, 4]\noutput_inverse_indices = [0, 1, 1, 2, 3, 2]\noutput_counts = [1, 2, 2, 1]\n```\n\nExample 2:\n```\ninput_X = [[1, 3], [2, 3]]\nattribute_sorted = 1\nattribute_axis = None\noutput_Y = [1, 2, 3]\noutput_indices = [0, 2, 1]\noutput_inverse_indices = [0, 2, 1, 2]\noutput_counts = [1, 1, 2]\n```\n\nExample 3:\n```\ninput_X = [[1, 0, 0], [1, 0, 0], [2, 3, 4]]\nattribute_sorted = 1\nattribute_axis = 0\noutput_Y = [[1, 0, 0], [2, 3, 4]]\noutput_indices = [0, 2]\noutput_inverse_indices = [0, 0, 1]\noutput_counts = [2, 1]\n```\n\nExample 4:\n```\ninput_x = [[[1., 1.], [0., 1.], [2., 1.], [0., 1.]],\n [[1., 1.], [0., 1.], [2., 1.], [0., 1.]]]\nattribute_sorted = 1\nattribute_axis = 1\n```\n\nintermediate data are presented below for better understanding:\nthere are 4 subtensors sliced along axis 1 of input_x (shape = (2, 4, 2)):\n```\nA: [[1, 1], [1, 1]],\n [[0, 1], [0, 1]],\n [[2, 1], [2, 1]],\n [[0, 1], [0, 1]].\n```\n\nthere are 3 unique subtensors:\n```\n[[1, 1], [1, 1]],\n[[0, 1], [0, 1]],\n[[2, 1], [2, 1]].\n```\n\nsorted unique subtensors:\n```\nB: [[0, 1], [0, 1]],\n [[1, 1], [1, 1]],\n [[2, 1], [2, 1]].\n```\n\noutput_Y is constructed from B:\n```\n[[[0. 1.], [1. 1.], [2. 1.]],\n [[0. 1.], [1. 1.], [2. 1.]]]\n```\n\noutput_indices is to map from B to A:\n```\n[1, 0, 2]\n```\n\noutput_inverse_indices is to map from A to B:\n```\n[1, 0, 2, 0]\n```\n\noutput_counts:\n```\n[2, 1, 1]\n```\n",
+ "description": "\nFind the unique elements of a tensor. When an optional attribute 'axis' is provided, unique subtensors sliced along the 'axis' are returned.\nOtherwise the input tensor is flattened and unique values of the flattened tensor are returned.\n\nThis operator returns the unique values or sliced unique subtensors of the input tensor and three optional outputs.\nThe first output tensor 'Y' contains all unique values or subtensors of the input.\nThe second optional output tensor 'indices' contains indices of 'Y' elements' first occurrence in 'X'.\nThe third optional output tensor 'inverse_indices' contains, for elements of 'X', its corresponding indices in 'Y'.\nThe fourth optional output tensor 'counts' contains the count of each element of 'Y' in the input.\n\nOutputs are either sorted in ascending order or optionally in the order of the first occurrence of the values in the input.\n\nhttps://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html\n\nExample 1:\n```\ninput_X = [2, 1, 1, 3, 4, 3]\nattribute_sorted = 0\nattribute_axis = None\noutput_Y = [2, 1, 3, 4]\noutput_indices = [0, 1, 3, 4]\noutput_inverse_indices = [0, 1, 1, 2, 3, 2]\noutput_counts = [1, 2, 2, 1]\n```\n\nExample 2:\n```\ninput_X = [[1, 3], [2, 3]]\nattribute_sorted = 1\nattribute_axis = None\noutput_Y = [1, 2, 3]\noutput_indices = [0, 2, 1]\noutput_inverse_indices = [0, 2, 1, 2]\noutput_counts = [1, 1, 2]\n```\n\nExample 3:\n```\ninput_X = [[1, 0, 0], [1, 0, 0], [2, 3, 4]]\nattribute_sorted = 1\nattribute_axis = 0\noutput_Y = [[1, 0, 0], [2, 3, 4]]\noutput_indices = [0, 2]\noutput_inverse_indices = [0, 0, 1]\noutput_counts = [2, 1]\n```\n\nExample 4:\n```\ninput_x = [[[1., 1.], [0., 1.], [2., 1.], [0., 1.]],\n [[1., 1.], [0., 1.], [2., 1.], [0., 1.]]]\nattribute_sorted = 1\nattribute_axis = 1\n```\n\nintermediate data are presented below for better understanding:\nthere are 4 subtensors sliced along axis 1 of input_x (shape = (2, 4, 2)):\n```\nA: [[1, 1], [1, 1]],\n [[0, 1], [0, 1]],\n [[2, 1], [2, 1]],\n [[0, 1], [0, 1]].\n```\n\nthere are 3 unique subtensors:\n```\n[[1, 1], [1, 1]],\n[[0, 1], [0, 1]],\n[[2, 1], [2, 1]].\n```\n\nsorted unique subtensors:\n```\nB: [[0, 1], [0, 1]],\n [[1, 1], [1, 1]],\n [[2, 1], [2, 1]].\n```\n\noutput_Y is constructed from B:\n```\n[[[0. 1.], [1. 1.], [2. 1.]],\n [[0. 1.], [1. 1.], [2. 1.]]]\n```\n\noutput_indices is to map from B to A:\n```\n[1, 0, 2]\n```\n\noutput_inverse_indices is to map from A to B:\n```\n[1, 0, 2, 0]\n```\n\noutput_counts:\n```\n[2, 1, 1]\n```\n",
"arguments": [
"X"
],
diff --git a/docs/MDF_function_specifications.md b/docs/MDF_function_specifications.md
index db58d62b..69848e9c 100644
--- a/docs/MDF_function_specifications.md
+++ b/docs/MDF_function_specifications.md
@@ -306,7 +306,7 @@ Python version: `actr.match_production(production,context)`
## Abs
Absolute takes one input data (Tensor) and produces one output data
-(Tensor) where the absolute is, y = abs(x), is applied to
+(Tensor) where the absolute value, y = abs(x), is applied to
the tensor elementwise.
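
For illustration, a minimal numpy sketch of the same elementwise behavior (numpy is only an analogy here, not the MDF implementation):

```python
import numpy as np

x = np.array([-1.5, 0.0, 2.0])
y = np.abs(x)  # elementwise absolute value -> [1.5, 0.0, 2.0]
```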
@@ -452,11 +452,17 @@ Python version: `onnx_ops.anumpy.tanh(input)`
* pad_shape[i] is sum of pads along axis i
```
- `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following:
+ `auto_pad` is a DEPRECATED attribute. If you are using it currently, the output spatial shape will be as follows when ceil_mode is enabled:
```
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
```
+ or when ceil_mode is disabled:
+ ```
+ VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
+ SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor(input_spatial_shape[i] / strides_spatial_shape[i])
+ ```
+
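For concreteness, a small Python sketch of the per-axis auto_pad output shapes above (the helper name and the sizes passed to it are illustrative only):

```python
import math

def auto_pad_output_size(in_size, kernel, stride, dilation, mode, ceil_mode):
    # effective kernel extent once dilation is applied
    eff_kernel = (kernel - 1) * dilation + 1
    rnd = math.ceil if ceil_mode else math.floor
    if mode == "VALID":
        return rnd((in_size - eff_kernel + 1) / stride)
    return rnd(in_size / stride)  # SAME_UPPER or SAME_LOWER

print(auto_pad_output_size(32, 3, 2, 1, "SAME_UPPER", ceil_mode=True))  # 16
print(auto_pad_output_size(32, 3, 2, 1, "VALID", ceil_mode=False))      # 15
```
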
And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:
```
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]
@@ -889,14 +895,14 @@ Python version: `onnx_ops.dropout(data, ratio, training_mode, seed)`
## DynamicQuantizeLinear
-A Function to fuse calculation for Scale, Zero Point and FP32->8Bit convertion of FP32 Input data.
+A Function to fuse calculation for Scale, Zero Point and FP32->8Bit conversion of FP32 Input data.
Outputs Scale, ZeroPoint and Quantized Input for a given FP32 Input.
Scale is calculated as:
```
-y_scale = (max(x) - min(x))/(qmax - qmin)
+y_scale = (maximum(0, max(x)) - minimum(0, min(x))) / (qmax - qmin)
```
-* where qmax and qmin are max and min values for quantization range .i.e [0, 255] in case of uint8
+* where qmax and qmin are max and min values for quantization range i.e. [0, 255] in case of uint8
* data range is adjusted to include 0.
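
A minimal numpy sketch of the scale computation above, assuming the uint8 range (illustrative, not the onnx_ops implementation):

```python
import numpy as np

x = np.array([-1.0, 0.5, 2.0], dtype=np.float32)
qmin, qmax = 0, 255  # uint8 quantization range
# the maximum(0, ...) / minimum(0, ...) terms adjust the data range to include 0
y_scale = (max(0.0, float(x.max())) - min(0.0, float(x.min()))) / (qmax - qmin)
print(y_scale)  # 3.0 / 255
```
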
Zero point is calculated as:
@@ -928,7 +934,7 @@ Python version: `onnx_ops.dynamicquantizelinear(x)`
An einsum of the form `term1, term2 -> output-term` produces an output tensor using the following equation
```
-output[output-term] = reduce-sum( input1[term1] * input2[term] )
+output[output-term] = reduce-sum( input1[term1] * input2[term2] )
```
where the reduce-sum performs a summation over all the indices occurring in the input terms (term1, term2)
@@ -1276,57 +1282,50 @@ The output tensor is obtained by mapping each index-tuple in the `indices` tenso
This operator is the inverse of `ScatterND`.
-`Example 1`
-
- batch_dims = 0
-
- data = [[0,1],[2,3]] # data_shape = [2, 2]
-
- indices = [[0,0],[1,1]] # indices_shape = [2, 2]
-
- output = [0,3] # output_shape = [2]
-
-`Example 2`
-
- batch_dims = 0
-
- data = [[0,1],[2,3]] # data_shape = [2, 2]
-
- indices = [[1],[0]] # indices_shape = [2, 1]
+**Example 1**
- output = [[2,3],[0,1]] # output_shape = [2, 2]
-
-`Example 3`
-
- batch_dims = 0
-
- data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
-
- indices = [[0,1],[1,0]] # indices_shape = [2, 2]
-
- output = [[2,3],[4,5]] # output_shape = [2, 2]
-
-`Example 4`
-
- batch_dims = 0
-
- data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
-
- indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
+```
+batch_dims = 0
+data = [[0,1],[2,3]] # data_shape = [2, 2]
+indices = [[0,0],[1,1]] # indices_shape = [2, 2]
+output = [0,3] # output_shape = [2]
+```
- output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
+**Example 2**
-`Example 5`
+```
+batch_dims = 0
+data = [[0,1],[2,3]] # data_shape = [2, 2]
+indices = [[1],[0]] # indices_shape = [2, 1]
+output = [[2,3],[0,1]] # output_shape = [2, 2]
+```
- batch_dims = 1
+**Example 3**
- data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+```
+batch_dims = 0
+data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+indices = [[0,1],[1,0]] # indices_shape = [2, 2]
+output = [[2,3],[4,5]] # output_shape = [2, 2]
+```
- indices = [[1],[0]] # indices_shape = [2, 1]
+**Example 4**
- output = [[2,3],[4,5]] # output_shape = [2, 2]
+```
+batch_dims = 0
+data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
+output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
+```
+**Example 5**
+```
+batch_dims = 1
+data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+indices = [[1],[0]] # indices_shape = [2, 1]
+output = [[2,3],[4,5]] # output_shape = [2, 2]
+```
Python version: `onnx_ops.gathernd(data, indices, batch_dims)`
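
The examples above can be reproduced with a short numpy sketch for the `batch_dims = 0` case (a hypothetical helper, not the `onnx_ops` implementation):

```python
import numpy as np

def gather_nd(data, indices):
    # Each entry along the last axis of `indices` is an index-tuple into
    # `data`; any remaining trailing dimensions of `data` are gathered whole.
    data, indices = np.asarray(data), np.asarray(indices)
    return data[tuple(np.moveaxis(indices, -1, 0))]

print(gather_nd([[0, 1], [2, 3]], [[0, 0], [1, 1]]))  # [0 3]         (Example 1)
print(gather_nd([[0, 1], [2, 3]], [[1], [0]]))        # [[2 3] [0 1]] (Example 2)
```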
@@ -1696,21 +1695,28 @@ Python version: `onnx_ops.max(data_0)`
the tensor according to kernel sizes, stride sizes, and pad lengths.
max pooling consisting of computing the max on all values of a
subset of the input tensor according to the kernel size and downsampling the
- data into the output tensor Y for further processing. The output spatial shape will be following:
+ data into the output tensor Y for further processing. The output spatial shape is calculated differently
+ depending on whether explicit padding (via pads) or auto padding (via auto_pad) is used.
+ With explicit padding (https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html?highlight=maxpool#torch.nn.MaxPool2d):
```
- output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)
+ output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)
```
or
```
- output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)
+ output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)
```
- if ceil_mode is enabled `pad_shape[i]` is the sum of pads along axis `i`.
+ if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`.
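
As a quick check of the explicit-padding formula above, a sketch assuming PyTorch is available (the parameter values are arbitrary):

```python
import math
import torch

k, s, p, d = 3, 2, 1, 1  # kernel, stride, per-side pad, dilation (example values)
pool = torch.nn.MaxPool2d(k, stride=s, padding=p, dilation=d, ceil_mode=False)
out = pool(torch.randn(1, 1, 32, 32))
# pad_shape[i] = 2 * p (pad applied on both sides); floor because ceil_mode is off
expected = math.floor((32 + 2 * p - d * (k - 1) - 1) / s + 1)
assert out.shape[-1] == expected  # both are 16
```
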
- `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following:
+ `auto_pad` is a DEPRECATED attribute. If you are using it currently, the output spatial shape will be as follows when ceil_mode is enabled:
```
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
```
+ or when ceil_mode is disabled (https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D):
+ ```
+ VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]) + 1
+ SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor((input_spatial_shape[i] - 1) / strides_spatial_shape[i]) + 1
+ ```
And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:
```
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]
@@ -1739,7 +1745,7 @@ Python version: `onnx_ops.maxroipool(X, rois, pooled_shape, spatial_scale)`
MaxUnpool essentially computes the partial inverse of the MaxPool op.
The input information to this op is typically the output information from a MaxPool op. The first
input tensor X is the tensor that needs to be unpooled, which is typically the pooled tensor (first output)
- from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corrsponding
+ from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corresponding
to the elements in the first input tensor X. Input tensor I is typically the second output of the MaxPool op.
The third (optional) input is a tensor that specifies the output size of the unpooling operation.
@@ -1752,7 +1758,7 @@ MaxUnpool can produce the same output size for several input sizes, which makes
known/predictable size.
In addition to the inputs, MaxUnpool takes three attributes, namely kernel_shape, strides, and pads,
- which define the exact unpooling op. The attributes typically have the same values as the corrsponding
+ which define the exact unpooling op. The attributes typically have the same values as the corresponding
pooling op that the unpooling op is trying to invert.
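
A minimal round-trip sketch, assuming PyTorch (whose MaxPool2d/MaxUnpool2d follow the same convention), illustrating the 'partial' inverse property:

```python
import torch

x = torch.arange(16.0).reshape(1, 1, 4, 4)
pool = torch.nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = torch.nn.MaxUnpool2d(kernel_size=2, stride=2)
y, indices = pool(x)        # indices locate each locally maximal element
x_hat = unpool(y, indices)  # non-maximal positions come back as zeros
assert torch.equal(pool(x_hat)[0], y)  # pooling the unpooled result recovers y
```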
@@ -2402,12 +2408,13 @@ Python version: `onnx_ops.reciprocal(X)`
## ReduceL1
Computes the L1 norm of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 0.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducel1(data, axes, keepdims)`
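
The `keepdims` and empty-set semantics described here can be checked against numpy, which the text uses as its reference (a small sketch with made-up data):

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)

# keepdims=1 (the ONNX default) preserves rank; keepdims=0 prunes the axis,
# matching numpy's keepdims=False default.
print(np.sum(np.abs(x), axis=1, keepdims=True).shape)   # (2, 1)
print(np.sum(np.abs(x), axis=1, keepdims=False).shape)  # (2,)

# Reduction over an empty set of values yields 0 for the L1 norm.
print(np.sum(np.abs(np.empty(0))))                      # 0.0
```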
@@ -2417,12 +2424,13 @@ Python version: `onnx_ops.reducel1(data, axes, keepdims)`
## ReduceL2
Computes the L2 norm of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 0.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducel2(data, axes, keepdims)`
@@ -2432,12 +2440,13 @@ Python version: `onnx_ops.reducel2(data, axes, keepdims)`
## ReduceLogSum
Computes the log sum of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or an undefined value otherwise.
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducelogsum(data, axes, keepdims)`
@@ -2447,12 +2456,13 @@ Python version: `onnx_ops.reducelogsum(data, axes, keepdims)`
## ReduceLogSumExp
Computes the log sum exponent of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or an undefined value otherwise.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducelogsumexp(data, axes, keepdims)`
@@ -2462,12 +2472,13 @@ Python version: `onnx_ops.reducelogsumnumpy.exp(data, axes, keepdims)`
## ReduceMax
Computes the max of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or the minimum value of the data type otherwise.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducemax(data, axes, keepdims)`
@@ -2477,12 +2488,13 @@ Python version: `onnx_ops.reducemax(data, axes, keepdims)`
## ReduceMean
Computes the mean of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields an undefined value.
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducemean(data, axes, keepdims)`
@@ -2492,12 +2504,13 @@ Python version: `onnx_ops.reducemean(data, axes, keepdims)`
## ReduceMin
Computes the min of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields plus infinity (if supported by the datatype) or the maximum value of the data type otherwise.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducemin(data, axes, keepdims)`
@@ -2507,12 +2520,13 @@ Python version: `onnx_ops.reducemin(data, axes, keepdims)`
## ReduceProd
Computes the product of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 1.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reduceprod(data, axes, keepdims)`
@@ -2522,12 +2536,13 @@ Python version: `onnx_ops.reduceprod(data, axes, keepdims)`
## ReduceSum
Computes the sum of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 0.
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducesum(data, axes, keepdims, noop_with_empty_axes)`
@@ -2537,12 +2552,13 @@ Python version: `onnx_ops.reducesum(data, axes, keepdims, noop_with_empty_axes)`
## ReduceSumSquare
Computes the sum square of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 0.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducesumsquare(data, axes, keepdims)`
@@ -2661,7 +2677,7 @@ Python version: `onnx_ops.roialign(X, rois, batch_indices, mode, output_height,
Round takes one input Tensor and rounds the values, element-wise, meaning
it finds the nearest integer for each value.
-In case of halfs, the rule is to round them to the nearest even integer.
+In case of halves, the rule is to round them to the nearest even integer.
If input x is integral, +0, -0, NaN, or infinite, x itself is returned.
The output tensor has the same shape and type as the input.
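
numpy's `round` follows the same round-half-to-even rule, so it can serve as a quick illustration:

```python
import numpy as np

x = np.array([0.5, 1.5, 2.5, -0.5, -1.5, 2.3])
# Halves go to the nearest even integer; other values to the nearest integer.
print(np.round(x))  # [ 0.  2.  2. -0. -2.  2.]
```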
@@ -3072,16 +3088,16 @@ https://numpy.org/doc/stable/user/basics.indexing.html?highlight=slice#slicing-a
Slice uses the `starts`, `ends`, `axes` and `steps` inputs to select a sub-tensor
of its input `data` tensor.
-An effective `start[i]`, `end[i]`, and `step[i]` must be computed for each `i`
+An effective `starts[i]`, `ends[i]`, and `steps[i]` must be computed for each `i`
in `[0, ... r-1]` where `r = rank(input)` as follows:
If `axes` are omitted, they are set to `[0, ..., r-1]`.
If `steps` are omitted, they are set to `[1, ..., 1]` of length `len(starts)`
-The effective values are initialized as `start[i] = 0`, `end[i] = dims[i]` where
-`dims` are the dimensions of `input` and `step[i] = `1.
+The effective values are initialized as `starts[i] = 0`, `ends[i] = dims[i]` where
+`dims` are the dimensions of `input` and `steps[i] = 1`.
-All negative elements of `axes` are made non-negatve by adding `r` to them, where
+All negative elements of `axes` are made non-negative by adding `r` to them, where
`r = rank(input)`.
All negative values in `starts[i]` and `ends[i]` have `dims[axes[i]]` added to them,
@@ -3091,10 +3107,10 @@ and `[0, dims[axes[i]]-1]` for negative stepping.
The clamping for the adjusted `ends[i]` depends on the sign of `steps[i]` and must
accommodate copying 0 through `dims[axes[i]]` elements, so for positive stepping
-`end[axes[i]]` is clamped to `[0, dims[axes[i]]]`, while for negative stepping it
+`ends[axes[i]]` is clamped to `[0, dims[axes[i]]]`, while for negative stepping it
is clamped to `[-1, dims[axes[i]]-1]`.
-Finally, `step[axes[i]] = steps[i]`.
+Finally, `steps[axes[i]] = steps[i]`.
For slicing to the end of a dimension with unknown size, it is recommended to pass
in `INT_MAX` when slicing forward and 'INT_MIN' when slicing backward.
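
As a small illustration of the effective values (hypothetical inputs, using numpy slicing as the reference behavior):

```python
import numpy as np

data = np.array([[1, 2, 3, 4],
                 [5, 6, 7, 8]])
starts, ends, axes, steps = [1, 0], [2, 3], [0, 1], [1, 2]

# axes = [0, 1], so axis 0 is sliced as 1:2:1 and axis 1 as 0:3:2.
print(data[starts[0]:ends[0]:steps[0], starts[1]:ends[1]:steps[1]])
# [[5 7]]
```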
@@ -3165,7 +3181,7 @@ After L is available, this operator can optionally do a reduction operator.
* shape(labels): (N) where each value is 0 <= labels[i] <= C-1, or (N, D1, D2,..., Dk),
with K >= 1 in case of K-dimensional loss.
-The loss for one sample, l_i, can caculated as follows:
+The loss for one sample, l_i, can be calculated as follows:
```
l[i][d1][d2]...[dk] = -y[i][c][d1][d2]..[dk], where i is the index of classes.
```
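
A minimal numpy sketch of this per-sample loss (made-up scores, no weights, no reduction): y here is the log-softmax of the scores, and the loss picks out the entry at the label index, negated.

```python
import numpy as np

scores = np.array([[1.0, 2.0, 0.5]])   # shape (N=1, C=3)
labels = np.array([1])

log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
l = -log_softmax[np.arange(len(labels)), labels]
print(l)  # ~[0.464]
```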
@@ -3476,8 +3492,8 @@ Otherwise the input tensor is flattened and unique values of the flattened tenso
This operator returns the unique values or sliced unique subtensors of the input tensor and three optional outputs.
The first output tensor 'Y' contains all unique values or subtensors of the input.
-The second optional output tensor 'indices' contains indices of 'Y' elements' first occurance in 'X'..
-The third optional output tensor 'inverse_indices' contains, for elements of 'X', its corresponding indices in 'Y'. ".
+The second optional output tensor 'indices' contains indices of 'Y' elements' first occurrence in 'X'.
+The third optional output tensor 'inverse_indices' contains, for each element of 'X', its corresponding index in 'Y'.
The fourth optional output tensor 'counts' contains the count of each element of 'Y' in the input.
Outputs are either sorted in ascending order or optionally in the order of the first occurrence of the values in the input.
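
`numpy.unique` exposes the same four outputs, which makes the description easy to check (sorted mode shown):

```python
import numpy as np

X = np.array([2, 1, 1, 3, 4, 3])
Y, indices, inverse_indices, counts = np.unique(
    X, return_index=True, return_inverse=True, return_counts=True)

print(Y)                # [1 2 3 4]      unique values, ascending
print(indices)          # [1 0 3 4]      first occurrence of each Y element in X
print(inverse_indices)  # [1 0 0 2 3 2]  index in Y for each element of X
print(counts)           # [2 1 2 1]      occurrences of each Y element in X
```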
diff --git a/docs/MDF_function_specifications.yaml b/docs/MDF_function_specifications.yaml
index 2bb9bf08..6bd5aa99 100644
--- a/docs/MDF_function_specifications.yaml
+++ b/docs/MDF_function_specifications.yaml
@@ -95,7 +95,7 @@ onnx::Abs:
Absolute takes one input data (Tensor) and produces one output data
- (Tensor) where the absolute is, y = abs(x), is applied to
+ (Tensor) where absolute value, y = abs(x), is applied to
the tensor elementwise.
@@ -254,11 +254,15 @@ onnx::AveragePool:
\ + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]\
\ + 1)\n ```\n if ceil_mode is enabled\n\n ```\n * pad_shape[i] is sum of\
\ pads along axis i\n ```\n\n `auto_pad` is a DEPRECATED attribute. If you\
- \ are using them currently, the output spatial shape will be following:\n\
- \ ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i]\
+ \ are using them currently, the output spatial shape will be following when\
+ \ ceil_mode is enabled:\n ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i]\
+ \ - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n\
+ \ SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i]\
+ \ / strides_spatial_shape[i])\n ```\nor when ceil_mode is disabled:\n ```\n\
+ \ VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i]\
\ - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n SAME_UPPER or\
- \ SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])\n\
- \ ```\n And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:\n\
+ \ SAME_LOWER: output_spatial_shape[i] = floor(input_spatial_shape[i] / strides_spatial_shape[i])\n\
+ \ ```\n\n And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:\n\
\ ```\n pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i]\
\ + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]\n\
\ ```\n The output of each pooling window is divided by the number of elements\
@@ -808,7 +812,7 @@ onnx::Dropout:
onnx::DynamicQuantizeLinear:
description: '
- A Function to fuse calculation for Scale, Zero Point and FP32->8Bit convertion
+ A Function to fuse calculation for Scale, Zero Point and FP32->8Bit conversion
of FP32 Input data.
Outputs Scale, ZeroPoint and Quantized Input for a given FP32 Input.
@@ -817,12 +821,12 @@ onnx::DynamicQuantizeLinear:
```
- y_scale = (max(x) - min(x))/(qmax - qmin)
+ y_scale = (maximum(0, max(x)) - minimum(0, min(x))) / (qmax - qmin)
```
- * where qmax and qmin are max and min values for quantization range .i.e [0,
+ * where qmax and qmin are max and min values for quantization range i.e. [0,
255] in case of uint8
* data range is adjusted to include 0.
@@ -875,7 +879,7 @@ onnx::Einsum:
```
- output[output-term] = reduce-sum( input1[term1] * input2[term] )
+ output[output-term] = reduce-sum( input1[term1] * input2[term2] )
```
@@ -1175,22 +1179,22 @@ onnx::GatherND:
\ tensors as `indices_slice`. Each *tensor slice* corresponding\n to `data[0:b-1,\
\ indices_slice , :]` is filled into the corresponding location of the `(q-b-1)`-dimensional\
\ tensor\n to form the `output` tensor (Examples 2, 3, 4 and 5 below)\n\n\
- This operator is the inverse of `ScatterND`.\n\n`Example 1`\n\n batch_dims\
- \ = 0\n\n data = [[0,1],[2,3]] # data_shape = [2, 2]\n\n indices =\
- \ [[0,0],[1,1]] # indices_shape = [2, 2]\n\n output = [0,3] \
- \ # output_shape = [2]\n\n`Example 2`\n\n batch_dims = 0\n\n data = [[0,1],[2,3]]\
- \ # data_shape = [2, 2]\n\n indices = [[1],[0]] # indices_shape = [2,\
- \ 1]\n\n output = [[2,3],[0,1]] # output_shape = [2, 2]\n\n`Example 3`\n\
- \n batch_dims = 0\n\n data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape\
- \ = [2, 2, 2]\n\n indices = [[0,1],[1,0]] # indices_shape\
- \ = [2, 2]\n\n output = [[2,3],[4,5]] # output_shape = [2,\
- \ 2]\n\n`Example 4`\n\n batch_dims = 0\n\n data = [[[0,1],[2,3]],[[4,5],[6,7]]]\
- \ # data_shape = [2, 2, 2]\n\n indices = [[[0,1]],[[1,0]]] #\
- \ indices_shape = [2, 1, 2]\n\n output = [[[2,3]],[[4,5]]] #\
- \ output_shape = [2, 1, 2]\n\n`Example 5`\n\n batch_dims = 1\n\n data \
- \ = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]\n\n indices =\
- \ [[1],[0]] # indices_shape = [2, 1]\n\n output = [[2,3],[4,5]]\
- \ # output_shape = [2, 2]\n\n\n"
+ This operator is the inverse of `ScatterND`.\n\n**Example 1**\n\n```\nbatch_dims\
+ \ = 0\ndata = [[0,1],[2,3]] # data_shape = [2, 2]\nindices = [[0,0],[1,1]]\
+ \ # indices_shape = [2, 2]\noutput = [0,3] # output_shape =\
+ \ [2]\n```\n\n**Example 2**\n\n```\nbatch_dims = 0\ndata = [[0,1],[2,3]]\
+ \ # data_shape = [2, 2]\nindices = [[1],[0]] # indices_shape = [2,\
+ \ 1]\noutput = [[2,3],[0,1]] # output_shape = [2, 2]\n```\n\n**Example\
+ \ 3**\n\n```\nbatch_dims = 0\ndata = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape\
+ \ = [2, 2, 2]\nindices = [[0,1],[1,0]] # indices_shape\
+ \ = [2, 2]\noutput = [[2,3],[4,5]] # output_shape = [2,\
+ \ 2]\n```\n\n**Example 4**\n\n```\nbatch_dims = 0\ndata = [[[0,1],[2,3]],[[4,5],[6,7]]]\
+ \ # data_shape = [2, 2, 2]\nindices = [[[0,1]],[[1,0]]] # indices_shape\
+ \ = [2, 1, 2]\noutput = [[[2,3]],[[4,5]]] # output_shape = [2,\
+ \ 1, 2]\n```\n\n**Example 5**\n\n```\nbatch_dims = 1\ndata = [[[0,1],[2,3]],[[4,5],[6,7]]]\
+ \ # data_shape = [2, 2, 2]\nindices = [[1],[0]] # indices_shape\
+ \ = [2, 1]\noutput = [[2,3],[4,5]] # output_shape = [2,\
+ \ 2]\n```\n"
arguments:
- data
- indices
@@ -1637,21 +1641,29 @@ onnx::MaxPool:
\ the tensor according to kernel sizes, stride sizes, and pad lengths.\n max\
\ pooling consisting of computing the max on all values of a\n subset of the\
\ input tensor according to the kernel size and downsampling the\n data into\
- \ the output tensor Y for further processing. The output spatial shape will\
- \ be following:\n ```\n output_spatial_shape[i] = floor((input_spatial_shape[i]\
- \ + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]\
- \ + 1)\n ```\n or\n ```\n output_spatial_shape[i] = ceil((input_spatial_shape[i]\
- \ + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]\
- \ + 1)\n ```\n if ceil_mode is enabled `pad_shape[i]` is the sum of pads along\
- \ axis `i`.\n\n `auto_pad` is a DEPRECATED attribute. If you are using them\
- \ currently, the output spatial shape will be following:\n ```\n VALID: output_spatial_shape[i]\
- \ = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i]\
- \ + 1) + 1) / strides_spatial_shape[i])\n SAME_UPPER or SAME_LOWER: output_spatial_shape[i]\
- \ = ceil(input_spatial_shape[i] / strides_spatial_shape[i])\n ```\n And pad\
- \ shape will be following if `SAME_UPPER` or `SAME_LOWER`:\n ```\n pad_shape[i]\
- \ = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i]\
- \ - 1) * dilations[i] + 1) - input_spatial_shape[i]\n ```\n The output of\
- \ each pooling window is maximum number of elements exclude pad. \n "
+ \ the output tensor Y for further processing. The output spatial shape is\
+ \ calculated differently\n depending on whether explicit padding (the pads\
+ \ attribute) or auto padding (the auto_pad attribute) is used.\n\
+ \ With explicit padding (https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html?highlight=maxpool#torch.nn.MaxPool2d):\n\
+ \ ```\n output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i]\
+ \ - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] +\
+ \ 1)\n ```\n or\n ```\n output_spatial_shape[i] = ceil((input_spatial_shape[i]\
+ \ + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i]\
+ \ + 1)\n ```\n if ceil_mode is enabled. `pad_shape[i]` is the sum of pads\
+ \ along axis `i`.\n\n `auto_pad` is a DEPRECATED attribute. If you are using\
+ \ them currently, the output spatial shape will be following when ceil_mode\
+ \ is enabled:\n ```\n VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i]\
+ \ - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])\n\
+ \ SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i]\
+ \ / strides_spatial_shape[i])\n ```\n or when ceil_mode is disabled (https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D):\n\
+ \ ```\n VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i]\
+ \ - 1) * dilations[i] + 1)) / strides_spatial_shape[i]) + 1\n SAME_UPPER or\
+ \ SAME_LOWER: output_spatial_shape[i] = floor((input_spatial_shape[i] - 1)\
+ \ / strides_spatial_shape[i]) + 1\n ```\n And pad shape will be following\
+ \ if `SAME_UPPER` or `SAME_LOWER`:\n ```\n pad_shape[i] = (output_spatial_shape[i]\
+ \ - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i]\
+ \ + 1) - input_spatial_shape[i]\n ```\n The output of each pooling window\
+ \ is the maximum of its elements, excluding padding.\n "
arguments:
- X
expression_string: onnx_ops.maxpool(X, auto_pad, ceil_mode, dilations, kernel_shape,
@@ -1670,7 +1682,7 @@ onnx::MaxUnpool:
\ from a MaxPool op. The first\n input tensor X is the tensor that needs to\
\ be unpooled, which is typically the pooled tensor (first output)\n from\
\ MaxPool. The second input tensor, I, contains the indices to the (locally\
- \ maximal) elements corrsponding\n to the elements in the first input tensor\
+ \ maximal) elements corresponding\n to the elements in the first input tensor\
\ X. Input tensor I is typically the second output of the MaxPool op.\n The\
\ third (optional) input is a tensor that specifies the output size of the\
\ unpooling operation.\n\nMaxUnpool is intended to do 'partial' inverse of\
@@ -1683,7 +1695,7 @@ onnx::MaxUnpool:
\ output tensor of\n known/predictable size.\n\nIn addition to the inputs,\
\ MaxUnpool takes three attributes, namely kernel_shape, strides, and pads,\n\
\ which define the exact unpooling op. The attributes typically have the same\
- \ values as the corrsponding\n pooling op that the unpooling op is trying\
+ \ values as the corresponding\n pooling op that the unpooling op is trying\
\ to invert.\n"
arguments:
- X
@@ -2338,19 +2350,20 @@ onnx::ReduceL1:
Computes the L1 norm of the input tensor''s elements along the provided axes.
The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields 0.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
expression_string: onnx_ops.reducel1(data, axes, keepdims)
@@ -2360,19 +2373,20 @@ onnx::ReduceL2:
Computes the L2 norm of the input tensor''s elements along the provided axes.
The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields 0.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
expression_string: onnx_ops.reducel2(data, axes, keepdims)
@@ -2382,19 +2396,21 @@ onnx::ReduceLogSum:
Computes the log sum of the input tensor''s elements along the provided axes.
The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields minus infinity (if supported
+ by the datatype) or an undefined value otherwise.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
expression_string: onnx_ops.reducelogsum(data, axes, keepdims)
@@ -2404,19 +2420,21 @@ onnx::ReduceLogSumExp:
Computes the log sum exponent of the input tensor''s elements along the provided
axes. The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields minus infinity (if supported
+ by the datatype) or an undefined value otherwise.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
expression_string: onnx_ops.reducelogsumexp(data, axes, keepdims)
@@ -2426,19 +2444,21 @@ onnx::ReduceMax:
Computes the max of the input tensor''s elements along the provided axes.
The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields minus infinity (if supported
+ by the datatype) or the minimum value of the data type otherwise.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
expression_string: onnx_ops.reducemax(data, axes, keepdims)
@@ -2448,19 +2468,20 @@ onnx::ReduceMean:
Computes the mean of the input tensor''s elements along the provided axes.
The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields an undefined value.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
expression_string: onnx_ops.reducemean(data, axes, keepdims)
@@ -2470,19 +2491,21 @@ onnx::ReduceMin:
Computes the min of the input tensor''s elements along the provided axes.
The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields plus infinity (if supported
+ by the datatype) or the maximum value of the data type otherwise.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
expression_string: onnx_ops.reducemin(data, axes, keepdims)
@@ -2492,19 +2515,20 @@ onnx::ReduceProd:
Computes the product of the input tensor''s elements along the provided axes.
The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields 1.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
expression_string: onnx_ops.reduceprod(data, axes, keepdims)
@@ -2514,19 +2538,20 @@ onnx::ReduceSum:
Computes the sum of the input tensor''s elements along the provided axes.
The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields 0.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
- axes
@@ -2537,19 +2562,20 @@ onnx::ReduceSumSquare:
Computes the sum square of the input tensor''s elements along the provided
axes. The resulting
- tensor has the same rank as the input if keepdims equals 1. If keepdims equals
- 0, then
+ tensor has the same rank as the input if `keepdims` equals 1. If `keepdims`
+ equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank
zero are
- valid.
+ valid. Reduction over an empty set of values yields 0.
+
The above behavior is similar to numpy, with the exception that numpy defaults
- keepdims to
+ `keepdims`
- False instead of True.'
+ to `False` instead of `True`.'
arguments:
- data
expression_string: onnx_ops.reducesumsquare(data, axes, keepdims)
@@ -2680,7 +2706,7 @@ onnx::Round:
it finds the nearest integer for each value.
- In case of halfs, the rule is to round them to the nearest even integer.
+ In case of halves, the rule is to round them to the nearest even integer.
If input x is integral, +0, -0, NaN, or infinite, x itself is returned.
@@ -3069,22 +3095,22 @@ onnx::Slice:
description: "\nProduces a slice of the input tensor along multiple axes. Similar\
\ to numpy:\nhttps://numpy.org/doc/stable/user/basics.indexing.html?highlight=slice#slicing-and-striding\n\
\nSlice uses the `starts`, `ends`, `axes` and `steps` inputs to select a sub-tensor\n\
- of its input `data` tensor.\n\nAn effective `start[i]`, `end[i]`, and `step[i]`\
+ of its input `data` tensor.\n\nAn effective `starts[i]`, `ends[i]`, and `steps[i]`\
\ must be computed for each `i`\nin `[0, ... r-1]` where `r = rank(input)`\
\ as follows:\n\nIf `axes` are omitted, they are set to `[0, ..., r-1]`.\n\
If `steps` are omitted, they are set to `[1, ..., 1]` of length `len(starts)`\n\
- \nThe effective values are initialized as `start[i] = 0`, `end[i] = dims[i]`\
- \ where\n`dims` are the dimensions of `input` and `step[i] = `1.\n\nAll negative\
- \ elements of `axes` are made non-negatve by adding `r` to them, where\n`r\
- \ =rank(input)`.\n\nAll negative values in `starts[i]` and `ends[i]` have\
+ \nThe effective values are initialized as `starts[i] = 0`, `ends[i] = dims[i]`\
+ \ where\n`dims` are the dimensions of `input` and `steps[i] = 1`.\n\nAll negative\
+ \ elements of `axes` are made non-negative by adding `r` to them, where\n\
+ `r = rank(input)`.\n\nAll negative values in `starts[i]` and `ends[i]` have\
\ `dims[axes[i]]` added to them,\nwhere `dims` are the dimensions of `input`.\
\ Then the adjusted\n`starts[i]` is clamped into the range\
\ `[0, dims[axes[i]]]` for positive stepping\nand `[0, dims[axes[i]]-1]` for\
\ negative stepping.\n\nThe clamping for the adjusted `ends[i]` depends on\
\ the sign of `steps[i]` and must\naccommodate copying 0 through `dims[axes[i]]`\
- \ elements, so for positive stepping\n`end[axes[i]]` is clamped to `[0, dims[axes[i]]]`,\
+ \ elements, so for positive stepping\n`ends[axes[i]]` is clamped to `[0, dims[axes[i]]]`,\
\ while for negative stepping it\nis clamped to `[-1, dims[axes[i]]-1]`.\n\
- \nFinally, `step[axes[i]] = steps[i]`.\n\nFor slicing to the end of a dimension\
+ \nFinally, `steps[axes[i]] = steps[i]`.\n\nFor slicing to the end of a dimension\
\ with unknown size, it is recommended to pass\nin `INT_MAX` when slicing\
\ forward and 'INT_MIN' when slicing backward.\n\nExample 1:\n\n```\ndata\
\ = [\n [1, 2, 3, 4],\n [5, 6, 7, 8],\n]\naxes = [0, 1]\nstarts = [1,\
@@ -3119,7 +3145,7 @@ onnx::SoftmaxCrossEntropyLoss:
\ the number of classes, or (N, C, D1, D2,..., Dk),\n with K >= 1 in case\
\ of K-dimensional loss.\n* shape(labels): (N) where each value is 0 <= labels[i]\
\ <= C-1, or (N, D1, D2,..., Dk),\n with K >= 1 in case of K-dimensional\
- \ loss.\n\nThe loss for one sample, l_i, can caculated as follows:\n```\n\
+ \ loss.\n\nThe loss for one sample, l_i, can be calculated as follows:\n```\n\
l[i][d1][d2]...[dk] = -y[i][c][d1][d2]..[dk], where i is the index of classes.\n\
```\nor\n```\nl[i][d1][d2]...[dk] = -y[i][c][d1][d2]..[dk] * weights[c], if\
\ 'weights' is provided.\n```\n\nloss is zero for the case when label-value\
@@ -3512,12 +3538,11 @@ onnx::Unique:
\ unique subtensors of the input tensor and three optional outputs.\nThe first\
\ output tensor 'Y' contains all unique values or subtensors of the input.\n\
The second optional output tensor 'indices' contains indices of 'Y' elements'\
- \ first occurance in 'X'..\nThe third optional output tensor 'inverse_indices'\
- \ contains, for elements of 'X', its corresponding indices in 'Y'. \".\nThe\
- \ fourth optional output tensor 'counts' contains the count of each element\
- \ of 'Y' in the input.\n\nOutputs are either sorted in ascending order or\
- \ optionally in the order of the first occurrence of the values in the input.\n\
- \nhttps://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html\n\
+ \ first occurrence in 'X'.\nThe third optional output tensor 'inverse_indices'\
+ \ contains, for each element of 'X', its corresponding index in 'Y'.\nThe fourth\
+ \ optional output tensor 'counts' contains the count of each element of 'Y'\
+ \ in the input.\n\nOutputs are either sorted in ascending order or optionally\
+ \ in the order of the first occurrence of the values in the input.\n\nhttps://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html\n\
\nExample 1:\n```\ninput_X = [2, 1, 1, 3, 4, 3]\nattribute_sorted = 0\nattribute_axis\
\ = None\noutput_Y = [2, 1, 3, 4]\noutput_indices = [0, 1, 3, 4]\noutput_inverse_indices\
\ = [0, 1, 1, 2, 3, 2]\noutput_counts = [1, 2, 2, 1]\n```\n\nExample 2:\n\
diff --git a/docs/sphinx/source/api/MDF_function_specifications.md b/docs/sphinx/source/api/MDF_function_specifications.md
index db58d62b..69848e9c 100644
--- a/docs/sphinx/source/api/MDF_function_specifications.md
+++ b/docs/sphinx/source/api/MDF_function_specifications.md
@@ -306,7 +306,7 @@ Python version: `actr.match_production(production,context)`
## Abs
Absolute takes one input data (Tensor) and produces one output data
-(Tensor) where the absolute is, y = abs(x), is applied to
+(Tensor) where absolute value, y = abs(x), is applied to
the tensor elementwise.
@@ -452,11 +452,17 @@ Python version: `onnx_ops.anumpy.tanh(input)`
* pad_shape[i] is sum of pads along axis i
```
- `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following:
+ `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following when ceil_mode is enabled:
```
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
```
+or when ceil_mode is disabled:
+ ```
+ VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
+ SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor(input_spatial_shape[i] / strides_spatial_shape[i])
+ ```
+
And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:
```
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]
@@ -889,14 +895,14 @@ Python version: `onnx_ops.dropout(data, ratio, training_mode, seed)`
## DynamicQuantizeLinear
-A Function to fuse calculation for Scale, Zero Point and FP32->8Bit convertion of FP32 Input data.
+A Function to fuse calculation for Scale, Zero Point and FP32->8Bit conversion of FP32 Input data.
Outputs Scale, ZeroPoint and Quantized Input for a given FP32 Input.
Scale is calculated as:
```
-y_scale = (max(x) - min(x))/(qmax - qmin)
+y_scale = (maximum(0, max(x)) - minimum(0, min(x))) / (qmax - qmin)
```
-* where qmax and qmin are max and min values for quantization range .i.e [0, 255] in case of uint8
+* where qmax and qmin are max and min values for quantization range i.e. [0, 255] in case of uint8
* data range is adjusted to include 0.
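
For uint8 (qmin = 0, qmax = 255) the scale formula above can be sketched as follows (hypothetical input; note how clamping against 0 implements the range adjustment):

```python
import numpy as np

x = np.array([1.5, 2.0, 3.0])   # hypothetical FP32 input
qmin, qmax = 0, 255

y_scale = (max(0.0, float(x.max())) - min(0.0, float(x.min()))) / (qmax - qmin)
print(y_scale)  # 3.0 / 255 ≈ 0.01176
```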
Zero point is calculated as:
@@ -928,7 +934,7 @@ Python version: `onnx_ops.dynamicquantizelinear(x)`
An einsum of the form `term1, term2 -> output-term` produces an output tensor using the following equation
```
-output[output-term] = reduce-sum( input1[term1] * input2[term] )
+output[output-term] = reduce-sum( input1[term1] * input2[term2] )
```
where the reduce-sum performs a summation over all the indices occurring in the input terms (term1, term2)
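
For example, `ij,jk->ik` is matrix multiplication: `j` occurs in the input terms but not in the output term, so it is summed over. A one-line numpy check (arbitrary small matrices):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)
# The reduce-sum over the shared index j reproduces the matrix product.
print(np.array_equal(np.einsum('ij,jk->ik', a, b), a @ b))  # True
```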
@@ -1276,57 +1282,50 @@ The output tensor is obtained by mapping each index-tuple in the `indices` tenso
This operator is the inverse of `ScatterND`.
-`Example 1`
-
- batch_dims = 0
-
- data = [[0,1],[2,3]] # data_shape = [2, 2]
-
- indices = [[0,0],[1,1]] # indices_shape = [2, 2]
-
- output = [0,3] # output_shape = [2]
-
-`Example 2`
-
- batch_dims = 0
-
- data = [[0,1],[2,3]] # data_shape = [2, 2]
-
- indices = [[1],[0]] # indices_shape = [2, 1]
+**Example 1**
- output = [[2,3],[0,1]] # output_shape = [2, 2]
-
-`Example 3`
-
- batch_dims = 0
-
- data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
-
- indices = [[0,1],[1,0]] # indices_shape = [2, 2]
-
- output = [[2,3],[4,5]] # output_shape = [2, 2]
-
-`Example 4`
-
- batch_dims = 0
-
- data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
-
- indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
+```
+batch_dims = 0
+data = [[0,1],[2,3]] # data_shape = [2, 2]
+indices = [[0,0],[1,1]] # indices_shape = [2, 2]
+output = [0,3] # output_shape = [2]
+```
- output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
+**Example 2**
-`Example 5`
+```
+batch_dims = 0
+data = [[0,1],[2,3]] # data_shape = [2, 2]
+indices = [[1],[0]] # indices_shape = [2, 1]
+output = [[2,3],[0,1]] # output_shape = [2, 2]
+```
- batch_dims = 1
+**Example 3**
- data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+```
+batch_dims = 0
+data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+indices = [[0,1],[1,0]] # indices_shape = [2, 2]
+output = [[2,3],[4,5]] # output_shape = [2, 2]
+```
- indices = [[1],[0]] # indices_shape = [2, 1]
+**Example 4**
- output = [[2,3],[4,5]] # output_shape = [2, 2]
+```
+batch_dims = 0
+data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+indices = [[[0,1]],[[1,0]]] # indices_shape = [2, 1, 2]
+output = [[[2,3]],[[4,5]]] # output_shape = [2, 1, 2]
+```
+**Example 5**
+```
+batch_dims = 1
+data = [[[0,1],[2,3]],[[4,5],[6,7]]] # data_shape = [2, 2, 2]
+indices = [[1],[0]] # indices_shape = [2, 1]
+output = [[2,3],[4,5]] # output_shape = [2, 2]
+```
Python version: `onnx_ops.gathernd(data, indices, batch_dims)`
@@ -1696,21 +1695,28 @@ Python version: `onnx_ops.max(data_0)`
the tensor according to kernel sizes, stride sizes, and pad lengths.
max pooling consisting of computing the max on all values of a
subset of the input tensor according to the kernel size and downsampling the
- data into the output tensor Y for further processing. The output spatial shape will be following:
+ data into the output tensor Y for further processing. The output spatial shape is calculated differently
+ depending on whether explicit padding (the pads attribute) or auto padding (the auto_pad attribute) is used.
+ With explicit padding (https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html?highlight=maxpool#torch.nn.MaxPool2d):
```
- output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)
+ output_spatial_shape[i] = floor((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)
```
or
```
- output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i] + 1)
+ output_spatial_shape[i] = ceil((input_spatial_shape[i] + pad_shape[i] - dilation[i] * (kernel_shape[i] - 1) - 1) / strides_spatial_shape[i] + 1)
```
- if ceil_mode is enabled `pad_shape[i]` is the sum of pads along axis `i`.
+ if ceil_mode is enabled. `pad_shape[i]` is the sum of pads along axis `i`.
- `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following:
+ `auto_pad` is a DEPRECATED attribute. If you are using them currently, the output spatial shape will be following when ceil_mode is enabled:
```
VALID: output_spatial_shape[i] = ceil((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) + 1) / strides_spatial_shape[i])
SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = ceil(input_spatial_shape[i] / strides_spatial_shape[i])
```
+ or when ceil_mode is disabled (https://www.tensorflow.org/api_docs/python/tf/keras/layers/AveragePooling2D):
+ ```
+ VALID: output_spatial_shape[i] = floor((input_spatial_shape[i] - ((kernel_spatial_shape[i] - 1) * dilations[i] + 1)) / strides_spatial_shape[i]) + 1
+ SAME_UPPER or SAME_LOWER: output_spatial_shape[i] = floor((input_spatial_shape[i] - 1) / strides_spatial_shape[i]) + 1
+ ```
And pad shape will be following if `SAME_UPPER` or `SAME_LOWER`:
```
pad_shape[i] = (output_spatial_shape[i] - 1) * strides_spatial_shape[i] + ((kernel_spatial_shape[i] - 1) * dilations[i] + 1) - input_spatial_shape[i]
@@ -1739,7 +1745,7 @@ Python version: `onnx_ops.maxroipool(X, rois, pooled_shape, spatial_scale)`
MaxUnpool essentially computes the partial inverse of the MaxPool op.
The input information to this op is typically the output information from a MaxPool op. The first
input tensor X is the tensor that needs to be unpooled, which is typically the pooled tensor (first output)
- from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corrsponding
+ from MaxPool. The second input tensor, I, contains the indices to the (locally maximal) elements corresponding
to the elements in the first input tensor X. Input tensor I is typically the second output of the MaxPool op.
The third (optional) input is a tensor that specifies the output size of the unpooling operation.
@@ -1752,7 +1758,7 @@ MaxUnpool can produce the same output size for several input sizes, which makes
known/predictable size.
In addition to the inputs, MaxUnpool takes three attributes, namely kernel_shape, strides, and pads,
- which define the exact unpooling op. The attributes typically have the same values as the corrsponding
+ which define the exact unpooling op. The attributes typically have the same values as the corresponding
pooling op that the unpooling op is trying to invert.
@@ -2402,12 +2408,13 @@ Python version: `onnx_ops.reciprocal(X)`
## ReduceL1
Computes the L1 norm of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 0.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducel1(data, axes, keepdims)`
@@ -2417,12 +2424,13 @@ Python version: `onnx_ops.reducel1(data, axes, keepdims)`
## ReduceL2
Computes the L2 norm of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 0.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducel2(data, axes, keepdims)`
@@ -2432,12 +2440,13 @@ Python version: `onnx_ops.reducel2(data, axes, keepdims)`
## ReduceLogSum
Computes the log sum of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or an undefined value otherwise.
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducelogsum(data, axes, keepdims)`
@@ -2447,12 +2456,13 @@ Python version: `onnx_ops.reducelogsum(data, axes, keepdims)`
## ReduceLogSumExp
Computes the log sum exponent of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or an undefined value otherwise.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducelogsumexp(data, axes, keepdims)`
@@ -2462,12 +2472,13 @@ Python version: `onnx_ops.reducelogsumnumpy.exp(data, axes, keepdims)`
## ReduceMax
Computes the max of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or the minimum value of the data type otherwise.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducemax(data, axes, keepdims)`
@@ -2477,12 +2488,13 @@ Python version: `onnx_ops.reducemax(data, axes, keepdims)`
## ReduceMean
Computes the mean of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields an undefined value.
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducemean(data, axes, keepdims)`
@@ -2492,12 +2504,13 @@ Python version: `onnx_ops.reducemean(data, axes, keepdims)`
## ReduceMin
Computes the min of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields plus infinity (if supported by the datatype) or the maximum value of the data type otherwise.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducemin(data, axes, keepdims)`
@@ -2507,12 +2520,13 @@ Python version: `onnx_ops.reducemin(data, axes, keepdims)`
## ReduceProd
Computes the product of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 1.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reduceprod(data, axes, keepdims)`
@@ -2522,12 +2536,13 @@ Python version: `onnx_ops.reduceprod(data, axes, keepdims)`
## ReduceSum
Computes the sum of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 0.
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducesum(data, axes, keepdims, noop_with_empty_axes)`
@@ -2537,12 +2552,13 @@ Python version: `onnx_ops.reducesum(data, axes, keepdims, noop_with_empty_axes)`
## ReduceSumSquare
Computes the sum square of the input tensor's elements along the provided axes. The resulting
-tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then
+tensor has the same rank as the input if `keepdims` equals 1. If `keepdims` equals 0, then
the resulting tensor has the reduced dimension pruned. Input tensors of rank zero are
-valid.
+valid. Reduction over an empty set of values yields 0.
+
-The above behavior is similar to numpy, with the exception that numpy defaults keepdims to
-False instead of True.
+The above behavior is similar to numpy, with the exception that numpy defaults `keepdims`
+to `False` instead of `True`.
Python version: `onnx_ops.reducesumsquare(data, axes, keepdims)`
@@ -2661,7 +2677,7 @@ Python version: `onnx_ops.roialign(X, rois, batch_indices, mode, output_height,
Round takes one input Tensor and rounds the values, element-wise, meaning
it finds the nearest integer for each value.
-In case of halfs, the rule is to round them to the nearest even integer.
+In case of halves, the rule is to round them to the nearest even integer.
If input x is integral, +0, -0, NaN, or infinite, x itself is returned.
The output tensor has the same shape and type as the input.
@@ -3072,16 +3088,16 @@ https://numpy.org/doc/stable/user/basics.indexing.html?highlight=slice#slicing-a
Slice uses the `starts`, `ends`, `axes` and `steps` inputs to select a sub-tensor
of its input `data` tensor.
-An effective `start[i]`, `end[i]`, and `step[i]` must be computed for each `i`
+An effective `starts[i]`, `ends[i]`, and `steps[i]` must be computed for each `i`
in `[0, ... r-1]` where `r = rank(input)` as follows:
If `axes` are omitted, they are set to `[0, ..., r-1]`.
If `steps` are omitted, they are set to `[1, ..., 1]` of length `len(starts)`
-The effective values are initialized as `start[i] = 0`, `end[i] = dims[i]` where
-`dims` are the dimensions of `input` and `step[i] = `1.
+The effective values are initialized as `starts[i] = 0`, `ends[i] = dims[i]` where
+`dims` are the dimensions of `input` and `steps[i] = 1`.
-All negative elements of `axes` are made non-negatve by adding `r` to them, where
+All negative elements of `axes` are made non-negative by adding `r` to them, where
`r = rank(input)`.
All negative values in `starts[i]` and `ends[i]` have `dims[axes[i]]` added to them,
@@ -3091,10 +3107,10 @@ and `[0, dims[axes[i]]-1]` for negative stepping.
The clamping for the adjusted `ends[i]` depends on the sign of `steps[i]` and must
accommodate copying 0 through `dims[axes[i]]` elements, so for positive stepping
-`end[axes[i]]` is clamped to `[0, dims[axes[i]]]`, while for negative stepping it
+`ends[axes[i]]` is clamped to `[0, dims[axes[i]]]`, while for negative stepping it
is clamped to `[-1, dims[axes[i]]-1]`.
-Finally, `step[axes[i]] = steps[i]`.
+Finally, `steps[axes[i]] = steps[i]`.
For slicing to the end of a dimension with unknown size, it is recommended to pass
in `INT_MAX` when slicing forward and 'INT_MIN' when slicing backward.
@@ -3165,7 +3181,7 @@ After L is available, this operator can optionally do a reduction operator.
* shape(labels): (N) where each value is 0 <= labels[i] <= C-1, or (N, D1, D2,..., Dk),
with K >= 1 in case of K-dimensional loss.
-The loss for one sample, l_i, can caculated as follows:
+The loss for one sample, l_i, can be calculated as follows:
```
l[i][d1][d2]...[dk] = -y[i][c][d1][d2]..[dk], where i is the index of classes.
```
@@ -3476,8 +3492,8 @@ Otherwise the input tensor is flattened and unique values of the flattened tenso
This operator returns the unique values or sliced unique subtensors of the input tensor and three optional outputs.
The first output tensor 'Y' contains all unique values or subtensors of the input.
-The second optional output tensor 'indices' contains indices of 'Y' elements' first occurance in 'X'..
-The third optional output tensor 'inverse_indices' contains, for elements of 'X', its corresponding indices in 'Y'. ".
+The second optional output tensor 'indices' contains indices of 'Y' elements' first occurrence in 'X'.
+The third optional output tensor 'inverse_indices' contains, for each element of 'X', its corresponding index in 'Y'.
The fourth optional output tensor 'counts' contains the count of each element of 'Y' in the input.
Outputs are either sorted in ascending order or optionally in the order of the first occurrence of the values in the input.
From 1bcc11dcd78eab481e65fa544d41eb39faf15fa0 Mon Sep 17 00:00:00 2001
From: pgleeson
Date: Thu, 21 Mar 2024 15:33:43 +0000
Subject: [PATCH 2/4] ONNX regenerated with latest onnx lib versions
---
examples/ONNX/ab.json | 2 +-
examples/ONNX/ab.png | Bin 44513 -> 44145 bytes
examples/ONNX/ab.yaml | 2 +-
examples/ONNX/abc.json | 6 +-
examples/ONNX/abc.yaml | 6 +-
examples/ONNX/abcd.json | 2 +-
examples/ONNX/abcd.yaml | 2 +-
examples/PyTorch/MDF_PyTorch/ABCD.onnx | Bin 13273 -> 13274 bytes
examples/PyTorch/MDF_PyTorch/Arrays.onnx | Bin 949 -> 950 bytes
examples/PyTorch/MDF_PyTorch/Simple.onnx | Bin 6276 -> 6277 bytes
examples/PyTorch/inception.json | 3718 ++++++++++------------
examples/PyTorch/inception.png | Bin 780160 -> 797110 bytes
12 files changed, 1703 insertions(+), 2035 deletions(-)
diff --git a/examples/ONNX/ab.json b/examples/ONNX/ab.json
index 71bcd02f..1e96f3cf 100644
--- a/examples/ONNX/ab.json
+++ b/examples/ONNX/ab.json
@@ -3,7 +3,7 @@
"format": "ModECI MDF v0.4",
"generating_application": "Python modeci-mdf v0.4.9",
"graphs": {
- "torch_jit": {
+ "main_graph": {
"nodes": {
"/A/Add": {
"input_ports": {
diff --git a/examples/ONNX/ab.png b/examples/ONNX/ab.png
index 9571df211c3c7dfa35a3d3fc46f0081e071d4f7b..3edee23c99e87323807cee59057297354a7b7c53 100644
GIT binary patch (base85 literals omitted: examples/ONNX/ab.png, Bin 44513 -> 44145 bytes)