# Eigen Tensors {#eigen_tensors}
Tensors are multidimensional arrays of elements. Elements are typically scalars,
but more complex types such as strings are also supported.
[TOC]
## Tensor Classes
You can manipulate a tensor with one of the following classes. They are all in
the namespace `::Eigen`.
### Class `Tensor<data_type, rank>`
This is the class to use to create a tensor and allocate memory for it. The
class is templatized with the tensor datatype, such as `float` or `int`, and
the tensor rank. The rank is the number of dimensions; for example, a tensor
of rank 2 is a matrix.
Tensors of this class are resizable. For example, if you assign a tensor of a
different size to a Tensor, that tensor is resized to match its new value.
#### Constructor `Tensor<data_type, rank>(size0, size1, ...)`
Constructor for a Tensor. The constructor must be passed `rank` integers
indicating the sizes of the instance along each of the `rank`
dimensions.

    // Create a tensor of rank 3 of sizes 2, 3, 4.  This tensor owns
    // memory to hold 24 floating point values (24 = 2 x 3 x 4).
    Tensor<float, 3> t_3d(2, 3, 4);

    // Resize t_3d by assigning a tensor of different sizes, but same rank.
    t_3d = Tensor<float, 3>(3, 4, 3);
#### Constructor `Tensor<data_type, rank>(size_array)`
Constructor where the sizes for the constructor are specified as an array of
values instead of an explicit list of parameters. The array type to use is
`Eigen::array<Eigen::Index>`. The array can be constructed automatically
from an initializer list.

    // Create a tensor of strings of rank 2 with sizes 5, 7.
    Tensor<string, 2> t_2d({5, 7});
### Class `TensorFixedSize<data_type, Sizes<size0, size1, ...>>`
Class to use for tensors of fixed size, where the size is known at compile
time. Fixed-size tensors can provide very fast computations because all their
dimensions are known by the compiler. Fixed-size tensors are not resizable.
If the total number of elements in a fixed-size tensor is small enough, the
tensor data is held on the stack and does not incur heap allocation.

    // Create a 4 x 3 tensor of floats.
    TensorFixedSize<float, Sizes<4, 3>> t_4x3;
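
For illustration, fixed-size tensors support the same expression API as
regular tensors. A minimal sketch (not part of the original example), using
the standard `setConstant()` initializer:

    // Initialize two fixed-size tensors and add them elementwise.
    TensorFixedSize<float, Sizes<4, 3>> a, b;
    a.setConstant(1.0f);
    b.setConstant(2.0f);
    TensorFixedSize<float, Sizes<4, 3>> c = a + b;  // every element is 3.0f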
### Class `TensorMap<Tensor<data_type, rank>>`
This is the class to use to create a tensor on top of memory allocated and
owned by another part of your code. It lets you view any piece of allocated
memory as a Tensor. Instances of this class do not own the memory where the
data are stored.
A TensorMap is not resizable because it does not own the memory where its data
are stored.
#### Constructor `TensorMap<Tensor<data_type, rank>>(data, size0, size1, ...)`
Constructor for a TensorMap. The constructor must be passed a pointer to the
storage for the data, and `rank` sizes. The storage has to be
large enough to hold all the data.

    // Map a tensor of ints on top of stack-allocated storage.
    int storage[128];  // 2 x 4 x 2 x 8 = 128
    TensorMap<Tensor<int, 4>> t_4d(storage, 2, 4, 2, 8);

    // The same storage can be viewed as a different tensor.
    // You can also pass the sizes as an array.
    TensorMap<Tensor<int, 2>> t_2d(storage, 16, 8);

    // You can also map fixed-size tensors.  Here we get a 1d view of
    // the 2d fixed-size tensor.
    TensorFixedSize<float, Sizes<4, 3>> t_4x3;
    TensorMap<Tensor<float, 1>> t_12(t_4x3.data(), 12);
### Class `TensorRef`
See Assigning to a TensorRef below.
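
As a brief preview (a sketch, assuming `t1` and `t2` are `Tensor<float, 3>`
instances), a `TensorRef` wraps an expression and evaluates only the
coefficients you actually access:

    // Wrap an expression without materializing the full result.
    TensorRef<Tensor<float, 3>> ref = ((t1 + t2) * 0.2f).exp();
    float v = ref(0, 1, 0);  // this coefficient is computed on the fly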
## Accessing Tensor Elements
#### `<data_type> tensor(index0, index1...)`
Return the element at position `(index0, index1...)` in tensor
`tensor`. You must pass as many parameters as the rank of `tensor`.
The expression can be used as an l-value to set the value of the element at the
specified position. The value returned is of the datatype of the tensor.

    // Set the value of the element at position (0, 1, 0).
    Tensor<float, 3> t_3d(2, 3, 4);
    t_3d(0, 1, 0) = 12.0f;

    // Initialize all elements to random values.
    for (int i = 0; i < 2; ++i) {
      for (int j = 0; j < 3; ++j) {
        for (int k = 0; k < 4; ++k) {
          t_3d(i, j, k) = ...some random value...;
        }
      }
    }

    // Print elements of a tensor.
    for (int i = 0; i < 2; ++i) {
      LOG(INFO) << t_3d(i, 0, 0);
    }
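
As an alternative to writing loops, the tensor classes also provide bulk
initializers such as `setRandom()` and `setConstant()`:

    // Fill every element with a random value in one call.
    t_3d.setRandom();

    // Set every element to the same value.
    t_3d.setConstant(12.0f);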
## TensorLayout
The tensor library supports two layouts: `ColMajor` (the default) and
`RowMajor`. Only the default column-major layout is currently fully
supported, so attempting to use the row-major layout is not recommended at
the moment.
The layout of a tensor is optionally specified as part of its type. If not
specified explicitly, column major is assumed.

    Tensor<float, 3, ColMajor> col_major;  // equivalent to Tensor<float, 3>
    TensorMap<Tensor<float, 3, RowMajor> > row_major(data, ...);
All the arguments to an expression must use the same layout. Attempting to mix
different layouts will result in a compilation error.
It is possible to change the layout of a tensor or an expression using the
`swap_layout()` method. Note that this will also reverse the order of the
dimensions.

    Tensor<float, 2, ColMajor> col_major(2, 4);
    Tensor<float, 2, RowMajor> row_major(2, 4);

    Tensor<float, 2> col_major_result = col_major;  // ok, layouts match
    Tensor<float, 2> col_major_result = row_major;  // will not compile

    // Simple layout swap
    col_major_result = row_major.swap_layout();
    eigen_assert(col_major_result.dimension(0) == 4);
    eigen_assert(col_major_result.dimension(1) == 2);

    // Swap the layout and preserve the order of the dimensions
    array<int, 2> shuffle{1, 0};
    col_major_result = row_major.swap_layout().shuffle(shuffle);
    eigen_assert(col_major_result.dimension(0) == 2);
    eigen_assert(col_major_result.dimension(1) == 4);
## Tensor Operations
The Eigen Tensor library provides a vast set of operations on Tensors:
numerical operations such as addition and multiplication, geometry operations
such as slicing and shuffling, etc. These operations are available as methods
of the Tensor classes, and in some cases as operator overloads. For example,
the following code computes the elementwise addition of two tensors:

    Tensor<float, 3> t1(2, 3, 4);
    ...set some values in t1...
    Tensor<float, 3> t2(2, 3, 4);
    ...set some values in t2...

    // Set t3 to the element wise sum of t1 and t2
    Tensor<float, 3> t3 = t1 + t2;
While the code above looks easy enough, it is important to understand that the
expression `t1 + t2` is not actually adding the values of the tensors. The
expression instead constructs a "tensor operator" object of the class
`TensorCwiseBinaryOp<scalar_sum>`, which has references to the tensors
`t1` and `t2`. This is a small C++ object that knows how to add
`t1` and `t2`. It is only when the value of the expression is assigned
to the tensor `t3` that the addition is actually performed. Technically,
this happens through the overloading of `operator=()` in the Tensor class.
This mechanism for computing tensor expressions allows for lazy evaluation and
optimizations which are what make the tensor library very fast.
Of course, the tensor operators do nest, and the expression `t1 + t2 * 0.3f`
is actually represented with the (approximate) tree of operators:

    TensorCwiseBinaryOp<scalar_sum>(t1, TensorCwiseUnaryOp<scalar_mul>(t2, 0.3f))
### Tensor Operations and C++ "auto"
Because Tensor operations create tensor operators, the C++ `auto` keyword
does not have its intuitive meaning. Consider these 2 lines of code:

    Tensor<float, 3> t3 = t1 + t2;
    auto t4 = t1 + t2;
In the first line we allocate the tensor `t3` and it will contain the
result of the addition of `t1` and `t2`. In the second line, `t4`
is actually the tree of tensor operators that will compute the addition of
`t1` and `t2`. In fact, `t4` is *not* a tensor and you cannot get
the values of its elements:

    Tensor<float, 3> t3 = t1 + t2;
    cout << t3(0, 0, 0);  // OK prints the value of t1(0, 0, 0) + t2(0, 0, 0)
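
By contrast, the same element access on `t4` does not compile, because `t4`
is the tree of tensor operators rather than a `Tensor` holding values:

    auto t4 = t1 + t2;
    cout << t4(0, 0, 0);  // Compilation error!

With `auto` you get a non-evaluated expression, not a `Tensor`, so use
`auto` only when you intend to delay evaluation.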