1. Tensor Overview
1.1. Synopsis
First, we will define a convenience alias:
typealias D = NativeStorage<Double>
Next, we can create two vectors:
// construct vector with values
let v1 = Tensor<D>([1, 2, 3])
// construct Tensor with a specific size
let v2 = Tensor<D>(Extent(5, 3))
// construct a matrix with values
let v3 = Tensor<D>([[1, 2, 3], [4, 5, 6]])
STEM supports standard linear algebra operators:
// take the dot product (result is a scalar)
let s1 = v1⊙v2
// take the outer product (result is a matrix)
let m1 = v1⊗v2
// add two vectors together
let v4 = v1+v3
// multiply by a scalar
let v5 = 0.5*v1
STEM also supports advanced indexing (similar to Numpy and Matlab):
let v6 = v2[1..<4]
let m2 = m1[1..<4, 0..<2]
As STEM's name implies, N-dimensional Tensors are supported. Both the Vector and Matrix classes are specializations of the Tensor class. These specializations allow for simpler construction methods as well as the use of accelerated libraries such as CBLAS and CUDA or OpenCL through function overloading.
1.2. Storage
All Tensors have an associated Storage class that is responsible for the allocated memory. The two built-in Storage types are NativeStorage and CBlasStorage. Other storage types (e.g. CUDA or OpenCL) can be added without requiring any rewrite of the main library, because the Storage type determines which functions get called. If no methods have been specified for the Storage class, NativeStorage will be used by default.
The Storage protocol is defined as:
public protocol Storage {
    associatedtype ElementType: NumericType

    var size: Int { get }
    var order: DimensionOrder { get }

    init(size: Int)
    init(array: [ElementType])
    init(storage: Self)
    init(storage: Self, copy: Bool)

    subscript(index: Int) -> ElementType { get set }

    // returns the order of dimensions to traverse
    func calculateOrder(dims: Int) -> [Int]

    // re-order list in order of dimensions to traverse
    func calculateOrder(values: [Int]) -> [Int]
}
An implementation of Storage determines the allocation through the init methods, subscript determines how the storage gets indexed, and calculateOrder allows the Storage to be iterated through in a sequential fashion.
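To make these responsibilities concrete, here is a minimal, self-contained sketch of a NativeStorage-style conformance. The NumericType, DimensionOrder, and SimpleStorage definitions below are simplified stand-ins for STEM's real types, included only so the example compiles on its own:

```swift
// Simplified stand-ins for STEM's types, so this sketch is self-contained.
protocol NumericType { init() }
extension Double: NumericType {}

enum DimensionOrder { case rowMajor, columnMajor }

protocol SimpleStorage {
    associatedtype ElementType: NumericType
    var size: Int { get }
    var order: DimensionOrder { get }
    init(size: Int)
    init(array: [ElementType])
    subscript(index: Int) -> ElementType { get set }
    func calculateOrder(dims: Int) -> [Int]
}

struct SimpleNativeStorage<T: NumericType>: SimpleStorage {
    private var data: [T]
    let order = DimensionOrder.rowMajor

    var size: Int { return data.count }

    // allocation happens in the initializers
    init(size: Int) { data = [T](repeating: T(), count: size) }
    init(array: [T]) { data = array }

    // subscript determines how the storage is indexed
    subscript(index: Int) -> T {
        get { return data[index] }
        set { data[index] = newValue }
    }

    // row-major storage traverses dimensions in their natural order
    func calculateOrder(dims: Int) -> [Int] {
        return Array(0..<dims)
    }
}

var storage = SimpleNativeStorage<Double>(size: 6)
storage[0] = 1.5
print(storage.size)                     // 6
print(storage[0])                       // 1.5
print(storage.calculateOrder(dims: 3))  // [0, 1, 2]
```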
The Tensor class frequently makes use of the generator IndexGenerator to iterate through the Storage class. This provides a convenient way to access all the elements without knowing the underlying memory allocation. To do so, the Tensor class defines the method:
public func indices(order: DimensionOrder? = nil) -> GeneratorSequence<IndexGenerator> {
    if let o = order {
        return GeneratorSequence<IndexGenerator>(IndexGenerator(shape, order: o))
    } else {
        return GeneratorSequence<IndexGenerator>(IndexGenerator(shape, order: storage.order))
    }
}
which can be used like:
func fill<StorageType: Storage>(tensor: Tensor<StorageType>, value: StorageType.ElementType) {
    for i in tensor.indices() {
        tensor.storage[i] = value
    }
}
However, as mentioned previously, if an optimized version of a particular Tensor operation exists, you can write:
// This will be used if the Tensor's storage type is CBlasStorage for Doubles;
// an alternative can be specified for Floats separately.
func fill(tensor: Tensor<CBlasStorage<Double>>, value: CBlasStorage<Double>.ElementType) {
    // call custom library
}
| Type | Description |
|---|---|
| NativeStorage | Unaccelerated, using row-major memory storage |
| CBlasStorage | CBLAS acceleration, using column-major memory storage |
| GPUStorage | (Not implemented) GPU acceleration, using row-major memory storage |
1.3. Tensor Class
The Tensor class is parameterized by the Storage type, allowing instances of the class to maintain a pointer to the underlying memory. The Tensor class also has an instance of ViewType, which allows different views of the same memory to be constructed, and the array dimIndex, which determines the order in which the dimensions of the Tensor are traversed. These features allow multiple Tensors to provide different views of the same memory (e.g. a slice of a Tensor can be created by changing the ViewType instance, or a Tensor can be transposed by shuffling dimIndex).
Note

Throughout the documentation, Tensor<S> indicates the parameterization of the Tensor class by Storage type S, and NumericType refers to the numeric type of S.ElementType (see the section on Storage for details).
1.4. Tensor Construction
- Tensor<S>(_ shape:Extent)
  Constructs a tensor with the given shape.
- Tensor<S>([NumericType], axis:Int)
  Constructs a vector along the given axis.
- Tensor<S>(colvector:[NumericType])
  Constructs a column vector (equivalent to Tensor<S>([NumericType], axis:0)).
- Tensor<S>(rowvector:[NumericType])
  Constructs a row vector (equivalent to Tensor<S>([NumericType], axis:1)).
- Tensor<S>([[NumericType]])
  Constructs a matrix.
1.5. Indexing
STEM supports single indexing as well as slice indexing. Given a Tensor T:
To index element (i, j, k):
let value = T[i, j, k]
T[i, j, k] = value
To index the slices (if:il, jf:jl, kf:kl):
let T2 = T[if...il, jf...jl, kf...kl]
T[if...il, jf...jl, kf...kl] = T2
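Single-element indexing ultimately maps the tuple (i, j, k) to a flat offset into the underlying Storage. The arithmetic for row-major layout can be sketched as follows; this is illustrative, not STEM's actual implementation:

```swift
// Map a multidimensional index to a flat offset in row-major storage.
// Each step scales the running offset by the current dimension's extent.
func flatOffset(_ index: [Int], shape: [Int]) -> Int {
    var offset = 0
    for d in 0..<shape.count {
        offset = offset * shape[d] + index[d]
    }
    return offset
}

// For a 2x3x4 tensor, element (1, 2, 0) lives at offset 1*12 + 2*4 + 0 = 20
print(flatOffset([1, 2, 0], shape: [2, 3, 4]))  // 20
```

A column-major Storage such as CBlasStorage would traverse the dimensions in the opposite order, which is exactly what calculateOrder abstracts away.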
1.6. Views
Views in STEM are instances of Tensor that point to the same Storage as another Tensor, but with different bounds and/or ordering of dimensions. Views are most commonly created whenever slice indexing is used. A copy of a view can be made by using the copy function.
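The distinction between a view and a copy can be sketched with a small self-contained example. The reference-typed SharedStorage and the View struct below are illustrative stand-ins for STEM's Storage and Tensor types, not its actual API:

```swift
// Reference type, so multiple views can share one allocation.
final class SharedStorage {
    var data: [Double]
    init(_ data: [Double]) { self.data = data }
}

struct View {
    let storage: SharedStorage
    let offset: Int

    subscript(i: Int) -> Double {
        get { return storage.data[offset + i] }
        set { storage.data[offset + i] = newValue }
    }

    // a copy gets its own storage, detached from the original
    func copy() -> View {
        return View(storage: SharedStorage(storage.data), offset: offset)
    }
}

let base = SharedStorage([0, 1, 2, 3, 4])
var view = View(storage: base, offset: 2)  // views elements 2...4
var detached = view.copy()                 // snapshot taken here

view[0] = 99       // writes through to the shared storage
detached[1] = -1   // only touches the copy's storage

print(base.data)              // [0.0, 1.0, 99.0, 3.0, 4.0]
print(detached.storage.data)  // [0.0, 1.0, 2.0, -1.0, 4.0]
```

Writing through the view mutates the original storage, while the copy remains unaffected by later writes to the view, mirroring the behavior described above.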