3.28 tensorflow

Usage:
include tensorflow
import tensorflow as ...
A module that provides a Pyret interface for TensorFlow, a symbolic math library for machine learning applications.

    3.28.1 Tensors

      3.28.1.1 Tensor Constructors

      3.28.1.2 Tensor Methods

    3.28.2 Tensor Operations

      3.28.2.1 Arithmetic Operations

      3.28.2.2 Trigonometry Operations

      3.28.2.3 Math Operations

      3.28.2.4 Reduction Operations

      3.28.2.5 Slicing and Joining Operations

    3.28.3 TensorBuffers

      3.28.3.1 TensorBuffer Constructors

      3.28.3.2 TensorBuffer Methods

    3.28.4 Models

      3.28.4.1 Generic Models

      3.28.4.2 Sequential Models

    3.28.5 SymbolicTensors

      3.28.5.1 SymbolicTensor Constructors

      3.28.5.2 SymbolicTensor Methods

    3.28.6 Layers

      3.28.6.1 Layer-Specific Datatypes

      3.28.6.2 Basic Layers

      3.28.6.3 Convolutional Layers

      3.28.6.4 Merge Layers

      3.28.6.5 Normalization Layers

      3.28.6.6 Pooling Layers

      3.28.6.7 Recurrent Layers

      3.28.6.8 Wrapper Layers

    3.28.7 Optimizers

      3.28.7.1 Optimizer Constructors

      3.28.7.2 Optimizer Methods

3.28.1 Tensors

Tensors are the core data structure for TensorFlow applications. They are a generalization of vectors and matrices that allows for higher dimensions.

For example, a tensor could be a one-dimensional matrix (a vector), a three-dimensional matrix (a cube), a zero-dimensional matrix (a single number), or a higher-dimensional structure that is more difficult to visualize.

For performance reasons, Tensors do not support arbitrary precision: TensorFlow.js (the library that the tensorflow module is built on) stores Tensor values in JavaScript Float32Arrays. Retrieving values from a Tensor using .data-now therefore always returns a List<Roughnum>.

Since Tensors are immutable, all operations always return new Tensors and never modify the input Tensors. The exception to this is when a Tensor is transformed into a mutable Tensor using the make-variable function or the .to-variable method. These "variable tensors" can be modified by Optimizers.

3.28.1.1 Tensor Constructors
[tensor: value :: Number, ...] -> Tensor

Creates a new Tensor with the given values.

Every Tensor created with this constructor is one-dimensional. Use .as-1d, .as-2d, .as-3d, .as-4d, or .reshape to change the shape of a Tensor after instantiating it.

Examples:

[tensor: 1, 2, 3] # a size-3 tensor
[tensor: 1.4, 5.2, 0.4, 12.4, 14.3, 6].as-2d(3, 2) # a 3 x 2 tensor
[tensor: 9, 4, 0, -32, 23, 1, 3, 2].as-3d(2, 2, 2) # a 2 x 2 x 2 tensor

is-tensor :: (val :: Any) -> Boolean

Returns true if val is a Tensor; otherwise, returns false.

Examples:

check:
  is-tensor([tensor: 1, 2, 3]) is true
  is-tensor(true) is false
  is-tensor(0) is false
  is-tensor([list: 1, 2, 3]) is false
end

list-to-tensor :: (values :: List<Number>) -> Tensor

Creates a new Tensor with the values in the input List.

Similar to the tensor constructor, all Tensors created using list-to-tensor are one-dimensional by default. Use .as-1d, .as-2d, .as-3d, .as-4d, or .reshape to change the shape of a Tensor after instantiating it.

Examples:

check:
  list-to-tensor(empty) satisfies is-tensor
  list-to-tensor([list: 5, 3, 4, 7]) satisfies is-tensor
  list-to-tensor(empty).data-now() is empty
  list-to-tensor([list: 9, 3, 2, 3]).data-now() is-roughly [list: 9, 3, 2, 3]
  list-to-tensor([list: 3, 2, 1, 0, 4, 9]).as-2d(2, 3).shape() is [list: 2, 3]
end

make-scalar :: (value :: Number) -> Tensor

Creates a new Tensor of rank-0 with the given value.

The same functionality can be achieved with the tensor constructor and the .as-scalar method, but it’s recommended to use make-scalar as it makes the code more readable.

Examples:

check:
  make-scalar(1).size() is 1
  make-scalar(~12.3).shape() is empty
  make-scalar(2.34).data-now() is-roughly [list: 2.34]
end

fill :: (shape :: List<NumInteger>, value :: Number) -> Tensor

Creates a Tensor with the input shape where all of the entries are value.

Examples:

check:
  fill([list: 0], 1).data-now() is-roughly [list: ]
  fill([list: 3], 5).data-now() is-roughly [list: 5, 5, 5]
  fill([list: 3, 2], -3).data-now() is-roughly [list: -3, -3, -3, -3, -3, -3]
end

linspace :: (
  start :: Number,
  stop :: Number,
  num-values :: Number
) -> Tensor

Returns a Tensor whose values are an evenly spaced sequence of numbers over the range [start, stop]. num-values is the number of entries in the output Tensor.

Examples:

check:
  linspace(0, 3, 1).data-now() is-roughly [list: 0]
  linspace(10, 11, 1).data-now() is-roughly [list: 10]
  linspace(5, 1, 5).data-now() is-roughly [list: 5, 4, 3, 2, 1]
  linspace(0, 9, 10).data-now() is-roughly [list: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
  linspace(0, 4, 9).data-now() is-roughly [list: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4]
end

ones :: (shape :: List<NumInteger>) -> Tensor

Returns a Tensor with the given shape where all of the entries are ones.

Examples:

check:
  ones([list: 0]).data-now() is-roughly [list: ]
  ones([list: 4]).data-now() is-roughly [list: 1, 1, 1, 1]
  two-dim = ones([list: 3, 2])
  two-dim.shape() is [list: 3, 2]
  two-dim.data-now() is-roughly [list: 1, 1, 1, 1, 1, 1]
end

zeros :: (shape :: List<NumInteger>) -> Tensor

Returns a Tensor with the given shape where all of the entries are zeros.

Examples:

check:
  zeros([list: 0]).data-now() is-roughly [list: ]
  zeros([list: 4]).data-now() is-roughly [list: 0, 0, 0, 0]
  two-dim = zeros([list: 3, 2])
  two-dim.shape() is [list: 3, 2]
  two-dim.data-now() is-roughly [list: 0, 0, 0, 0, 0, 0]
end

multinomial :: (
  logits :: Tensor,
  num-samples :: NumPositive,
  seed :: Option<Number>,
  is-normalized :: Boolean
) -> Tensor

Creates a new Tensor where all of the values are sampled from a multinomial distribution.

logits should be a Tensor representing a one-dimensional array of unnormalized log-probabilities, or a two-dimensional array of shape [batch-size, num-outcomes].

num-samples is the number of samples to draw for each row slice. seed represents the random seed to use when generating values; if it is none, the seed is randomly generated. is-normalized designates whether or not the provided logits are normalized, true probabilities (that is, whether they sum to 1).

Examples:

check:
  three-dim = [tensor: 1, 1, 1, 1, 1, 1, 1, 1].as-3d(2, 2, 2)
  multinomial(three-dim, 2, none, false) raises "must be a one-dimensional or two-dimensional Tensor"
  multinomial([tensor: ], 1, none, false) raises "must have at least two possible outcomes"
  multinomial([tensor: 0.8], 7, none, false) raises "must have at least two possible outcomes"
  multinomial([tensor: 1.0, 0.0], 1, none, true).shape() is [list: 1]
  multinomial([tensor: 1.0, 0.0], 3, none, true).shape() is [list: 3]
  multinomial([tensor: 0.3, 0.5, 0.7], 10, none, false).shape() is [list: 10]
end

random-normal :: (
  shape :: List<NumInteger>,
  mean :: Option<Number>,
  standard-deviation :: Option<Number>
) -> Tensor

Creates a new Tensor with the given shape (represented as the values in the input List<NumInteger>) where all of the values are sampled from a normal distribution.

mean is the mean of the normal distribution and standard-deviation is the standard deviation of the normal distribution. If none, the respective parameters are set to the TensorFlow.js defaults.

Examples:

check:
  random-normal(empty, none, none).size() is 1
  random-normal(empty, none, none).shape() is empty
  random-normal([list: 4, 3], none, none).shape() is [list: 4, 3]
  random-normal([list: 2, 5, 3], none, none).shape() is [list: 2, 5, 3]
end

random-uniform :: (
  shape :: List<NumInteger>,
  min-val :: Option<Number>,
  max-val :: Option<Number>
) -> Tensor

Creates a new Tensor with the given shape (represented as values in the input List) where all of the values are sampled from a uniform distribution.

min-val is the lower bound on the range of random values to generate and max-val is the upper bound on the range of random values to generate. If none, the respective parameters are set to the TensorFlow.js defaults.

Examples:

check:
  random-uniform(empty, none, none).size() is 1
  random-uniform(empty, none, none).shape() is empty
  random-uniform([list: 1, 3], none, none).shape() is [list: 1, 3]
  random-uniform([list: 5, 4, 8], none, none).shape() is [list: 5, 4, 8]
  lower-bound = 1
  upper-bound = 10
  random-data = random-uniform([list: 20], some(lower-bound), some(upper-bound))
  for each(data-point from random-data.data-now()):
    data-point satisfies lam(x): (x >= lower-bound) and (x <= upper-bound) end
  end
end

make-variable :: (initial-value :: Tensor) -> Tensor

Creates a new, mutable Tensor initialized to the values of the input Tensor.

The same functionality can be achieved with the .to-variable method.

Examples:

check:
  make-variable([tensor: ]).data-now() is-roughly empty
  make-variable([tensor: 1]).data-now() is-roughly [list: 1]

  # We can perform normal Tensor operations on mutable Tensors:
  two-dim = [tensor: 4, 5, 3, 9].as-2d(2, 2)
  make-variable(two-dim).size() is 4
  make-variable(two-dim).shape() is [list: 2, 2]
  make-variable(two-dim).data-now() is-roughly [list: 4, 5, 3, 9]
  make-variable(two-dim).as-3d(4, 1, 1).shape() is [list: 4, 1, 1]
end

3.28.1.2 Tensor Methods

.size :: () -> Number

Returns the size of the Tensor (the number of values stored in the Tensor).

Examples:

check:
  make-scalar(4.21).size() is 1
  [tensor: 6.32].size() is 1
  [tensor: 1, 2, 3].size() is 3
  [tensor: 1.4, 5.2, 0.4, 12.4, 14.3, 6].as-2d(3, 2).size() is 6
end

.shape :: () -> List<NumInteger>

Returns a List<NumInteger> representing the shape of the Tensor. Each element in the List<NumInteger> corresponds to the size in each dimension.

Examples:

check:
  make-scalar(3).shape() is empty
  [tensor: 9].shape() is [list: 1]
  [tensor: 8, 3, 1].shape() is [list: 3]
  [tensor: 0, 0, 0, 0, 0, 0].as-2d(3, 2).shape() is [list: 3, 2]
end

.flatten :: () -> Tensor

Constructs a new, one-dimensional Tensor from the values of the original Tensor.

Examples:

check:
  a = [tensor: 1, 2, 3, 4, 5, 6].as-2d(3, 2)
  a.shape() is [list: 3, 2]
  a.flatten().shape() is [list: 6]
  b = make-scalar(12)
  b.shape() is empty
  b.flatten().shape() is [list: 1]
end

.as-scalar :: () -> Tensor

Constructs a new, zero-dimensional Tensor from the values of the original, size-1 Tensor.

Raises an error if the calling Tensor is not size-1.

Examples:

check:
  size-one = [tensor: 1]
  size-one.as-scalar().shape() is empty
  size-one.shape() is [list: 1] # doesn't modify shape of original tensor
  size-two = [tensor: 1, 2]
  size-two.as-scalar() raises "Tensor was size-2 but `as-scalar` requires the tensor to be size-1"
end

.as-1d :: () -> Tensor

Constructs a new, rank-1 Tensor from the values of the original Tensor.

The same functionality can be achieved with .reshape, but it’s recommended to use .as-1d as it makes the code more readable.

Examples:

check:
  one-dim = [tensor: 1]
  two-dim = [tensor: 4, 3, 2, 1].as-2d(2, 2)
  three-dim = [tensor: 0, 1, 2, 3, 4, 5, 6, 7, 8].as-3d(3, 1, 3)
  one-dim.shape() is [list: 1]
  one-dim.as-1d().shape() is [list: 1]
  two-dim.shape() is [list: 2, 2]
  two-dim.as-1d().shape() is [list: 4]
  three-dim.shape() is [list: 3, 1, 3]
  three-dim.as-1d().shape() is [list: 9]
end

.as-2d :: (rows :: NumInteger, columns :: NumInteger) -> Tensor

Constructs a new, rank-2 Tensor with the input dimensions from the values of the original Tensor.

The number of elements implied by the input dimensions must be the same as the number of elements in the calling Tensor. Otherwise, the method raises an error.

The same functionality can be achieved with .reshape, but it’s recommended to use .as-2d as it makes the code more readable.

Examples:

check:
  one-dim = [tensor: 1]
  two-dim = [tensor: 0, 1, 2, 3, 4, 5].as-2d(3, 2)
  three-dim = [tensor: 4, 3, 2, 1, 0, -1, -2, -3].as-3d(2, 2, 2)
  one-dim.shape() is [list: 1]
  one-dim.as-2d(1, 1).shape() is [list: 1, 1]
  two-dim.shape() is [list: 3, 2]
  two-dim.as-2d(2, 3).shape() is [list: 2, 3]
  three-dim.shape() is [list: 2, 2, 2]
  three-dim.as-2d(4, 2).shape() is [list: 4, 2]
  one-dim.as-2d(2, 1) raises "Cannot reshape"
  two-dim.as-2d(3, 3) raises "Cannot reshape"
  three-dim.as-2d(5, 4) raises "Cannot reshape"
end

.as-3d :: (
  rows :: NumInteger,
  columns :: NumInteger,
  depth :: NumInteger
) -> Tensor

Constructs a new, rank-3 Tensor with the input dimensions from the values of the original Tensor.

The number of elements implied by the input dimensions must be the same as the number of elements in the calling Tensor. Otherwise, the method raises an error.

The same functionality can be achieved with .reshape, but it’s recommended to use .as-3d as it makes the code more readable.

Examples:

check:
  one-dim = [tensor: 1]
  two-dim = [tensor: 0, 1, 2, 3, 4, 5, 6, 7].as-2d(4, 2)
  one-dim.shape() is [list: 1]
  one-dim.as-3d(1, 1, 1).shape() is [list: 1, 1, 1]
  two-dim.shape() is [list: 4, 2]
  two-dim.as-3d(2, 2, 2).shape() is [list: 2, 2, 2]
  one-dim.as-3d(2, 1, 1) raises "Cannot reshape"
  two-dim.as-3d(4, 3, 2) raises "Cannot reshape"
end

.as-4d :: (
  rows :: NumInteger,
  columns :: NumInteger,
  depth1 :: NumInteger,
  depth2 :: NumInteger
) -> Tensor

Constructs a new, rank-4 Tensor with the input dimensions from the values of the original Tensor.

The number of elements implied by the input dimensions must be the same as the number of elements in the calling Tensor. Otherwise, the method raises an error.

The same functionality can be achieved with .reshape, but it’s recommended to use .as-4d as it makes the code more readable.

Examples:

check:
  one-dim = [tensor: 1]
  two-dim = [tensor: 0, 1, 2, 3, 4, 5, 6, 7].as-2d(4, 2)
  one-dim.shape() is [list: 1]
  one-dim.as-4d(1, 1, 1, 1).shape() is [list: 1, 1, 1, 1]
  two-dim.shape() is [list: 4, 2]
  two-dim.as-4d(2, 2, 1, 2).shape() is [list: 2, 2, 1, 2]
  one-dim.as-4d(2, 1, 1, 1) raises "Cannot reshape"
  two-dim.as-4d(2, 2, 2, 2) raises "Cannot reshape"
end

.as-type :: (data-type :: String) -> Tensor

Constructs a new Tensor from the values of the original Tensor with all of the values cast to the input datatype.

The possible data-types are "float32", "int32", or "bool". Any other data-type will raise an error.

Examples:

check:
  some-tensor = [tensor: 1, 3, 5, 8]
  some-tensor.as-type("float32") does-not-raise
  some-tensor.as-type("int32") does-not-raise
  some-tensor.as-type("bool") does-not-raise
  some-tensor.as-type("invalid") raises "Attempted to cast tensor to invalid type"
end

.data-now :: () -> List<Roughnum>

Returns a List containing the data in the Tensor.

Examples:

check:
  [tensor: ].data-now() is-roughly [list: ]
  [tensor: 1].data-now() is-roughly [list: 1]
  [tensor: 1.43].data-now() is-roughly [list: 1.43]
  [tensor: -3.21, 9.4, 0.32].data-now() is-roughly [list: -3.21, 9.4, 0.32]
end

.to-float :: () -> Tensor

Constructs a new Tensor from the values of the original Tensor with all of the values cast to the "float32" datatype.

Examples:

check:
  [tensor: 0].to-float().data-now() is-roughly [list: 0]
  [tensor: 1].to-float().data-now() is-roughly [list: 1]
  [tensor: 0.42].to-float().data-now() is-roughly [list: 0.42]
  [tensor: 4, 0.32, 9.40, 8].to-float().data-now() is-roughly [list: 4, 0.32, 9.40, 8]
end

.to-int :: () -> Tensor

Constructs a new Tensor from the values of the original Tensor with all of the values cast to the "int32" datatype.

Examples:

check:
  [tensor: 0].to-int().data-now() is-roughly [list: 0]
  [tensor: 1].to-int().data-now() is-roughly [list: 1]
  [tensor: 0.999999].to-int().data-now() is-roughly [list: 0]
  [tensor: 1.52, 4.12, 5.99].to-int().data-now() is-roughly [list: 1, 4, 5]
end

.to-bool :: () -> Tensor

Constructs a new Tensor from the values of the original Tensor with all of the values cast to the "bool" datatype.

Examples:

check:
  [tensor: 0].to-bool().data-now() is-roughly [list: 0]
  [tensor: 1].to-bool().data-now() is-roughly [list: 1]
  [tensor: 0.42].to-bool().data-now() is-roughly [list: 1]
  [tensor: 1, 4, 5].to-bool().data-now() is-roughly [list: 1, 1, 1]
end

.to-buffer :: () -> TensorBuffer

Constructs a new TensorBuffer from the values of the original Tensor.

Examples:

check:
  empty-buffer = [tensor: ].to-buffer()
  empty-buffer satisfies is-tensor-buffer
  empty-buffer.get-all-now() is-roughly [list: ]
  some-shape = [list: 2, 2]
  some-values = [list: 4, 5, 9, 3]
  some-tensor = list-to-tensor(some-values).reshape(some-shape)
  some-buffer = some-tensor.to-buffer()
  some-buffer satisfies is-tensor-buffer
  some-buffer.get-all-now() is-roughly some-values
  some-buffer.to-tensor().shape() is some-shape
end

.to-variable :: () -> Tensor

Constructs a new, mutable Tensor from the values of the original Tensor. Equivalent to applying make-variable on the calling Tensor.

Examples:

check:
  [tensor: ].to-variable() does-not-raise
  [tensor: 4, 5, 1].to-variable() does-not-raise
  [tensor: 0, 5, 1, 9, 8, 4].as-2d(3, 2).to-variable() does-not-raise
end

.reshape :: (new-shape :: List<NumInteger>) -> Tensor

Constructs a new Tensor with the input dimensions new-shape from the values of the original Tensor.

The number of elements implied by new-shape must be the same as the number of elements in the calling Tensor. Otherwise, the method raises an error.

When reshaping a Tensor to be 0-, 1-, 2-, 3-, or 4-dimensional, it’s recommended to use .as-scalar, .as-1d, .as-2d, .as-3d, or .as-4d as they make the code more readable.

Examples:

check:
  [tensor: ].reshape([list: ]) raises "Cannot reshape"
  [tensor: 3, 2].reshape([list: ]) raises "Cannot reshape"
  [tensor: 3, 2].reshape([list: 6]) raises "Cannot reshape"
  [tensor: 3, 2, 1].reshape([list: 2, 4]) raises "Cannot reshape"
  [tensor: 1].reshape([list: 1]).shape() is [list: 1]
  [tensor: 1].reshape([list: 1, 1, 1]).shape() is [list: 1, 1, 1]
  [tensor: 1].reshape([list: 1, 1, 1, 1, 1]).shape() is [list: 1, 1, 1, 1, 1]
  [tensor: 1, 4].reshape([list: 2, 1]).shape() is [list: 2, 1]
  [tensor: 1, 4, 4, 5, 9, 3].reshape([list: 3, 2]).shape() is [list: 3, 2]
end

.expand-dims :: (axis :: Option<NumInteger>) -> Tensor

Returns a Tensor that has expanded rank, by inserting a dimension into the Tensor’s shape at the given dimension index axis. If axis is none, the method inserts a dimension at index 0 by default.

Examples:

check:
  one-dim = [tensor: 1, 2, 3, 4]
  one-dim.shape() is [list: 4]
  one-dim.expand-dims(none).shape() is [list: 1, 4]
  one-dim.expand-dims(some(1)).shape() is [list: 4, 1]
  one-dim.expand-dims(some(2)) raises "input axis must be less than or equal to the rank of the tensor"
end

.squeeze :: (axes :: Option<List<NumInteger>>) -> Tensor

Returns a Tensor with dimensions of size 1 removed from the shape.

If axes is not none, the method only squeezes the dimensions listed as indices in axes. The method will raise an error if one of the dimensions specified in axes is not of size 1.

Examples:

check:
  multi-dim = [tensor: 1, 2, 3, 4].reshape([list: 1, 1, 1, 4, 1])
  multi-dim.shape() is [list: 1, 1, 1, 4, 1]
  multi-dim.squeeze(none).shape() is [list: 4]
  multi-dim.squeeze(some([list: 0])).shape() is [list: 1, 1, 4, 1]
  multi-dim.squeeze(some([list: 4])).shape() is [list: 1, 1, 1, 4]
  multi-dim.squeeze(some([list: 1, 2])).shape() is [list: 1, 4, 1]
  multi-dim.squeeze(some([list: 7])) raises "Cannot squeeze axis 7 since the axis does not exist"
  multi-dim.squeeze(some([list: 3])) raises "Cannot squeeze axis 3 since the dimension of that axis is 4, not 1"
end

.clone :: () -> Tensor

Constructs a new Tensor that is a copy of the original Tensor.

Examples:

check:
  some-tensor = [tensor: 1, 2, 3, 4]
  new-tensor = some-tensor.clone()
  new-tensor.size() is some-tensor.size()
  new-tensor.shape() is some-tensor.shape()
  new-tensor.data-now() is-roughly some-tensor.data-now()
end

.add :: (x :: Tensor) -> Tensor

Adds x to the Tensor. This is equivalent to add-tensors(self, x).

Examples:

check:
  [tensor: 1].add([tensor: 1]).data-now() is-roughly [list: 2]
  [tensor: 1, 3].add([tensor: 1]).data-now() is-roughly [list: 2, 4]
  [tensor: 1, 3].add([tensor: 5, 1]).data-now() is-roughly [list: 6, 4]
  [tensor: 1, 3, 4].add([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

.subtract :: (x :: Tensor) -> Tensor

Subtracts x from the Tensor. This is equivalent to subtract-tensors(self, x).

Examples:

check:
  [tensor: 1].subtract([tensor: 1]).data-now() is-roughly [list: 0]
  [tensor: 1, 3].subtract([tensor: 1]).data-now() is-roughly [list: 0, 2]
  [tensor: 1, 3].subtract([tensor: 5, 1]).data-now() is-roughly [list: -4, 2]
  [tensor: 1, 3, 4].subtract([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

.multiply :: (x :: Tensor) -> Tensor

Multiplies the Tensor by x. This is equivalent to multiply-tensors(self, x).

Examples:

check:
  [tensor: 1].multiply([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 1, 3].multiply([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].multiply([tensor: 5, 1]).data-now() is-roughly [list: 5, 3]
  [tensor: 1, 3, 4].multiply([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

.divide :: (x :: Tensor) -> Tensor

Divides the Tensor by x. This is equivalent to divide-tensors(self, x).

Examples:

check:
  [tensor: 1].divide([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 1, 3].divide([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].divide([tensor: 5, 1]).data-now() is-roughly [list: 0.2, 3]
  [tensor: 1, 3, 4].divide([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
  [tensor: 1].divide([tensor: 0]) raises "The argument Tensor cannot contain 0"
  [tensor: 4.23].divide([tensor: 7.65, 1.43, 0, 2.31]) raises "The argument Tensor cannot contain 0"
end

.floor-divide :: (x :: Tensor) -> Tensor

Divides the Tensor by x, with the result rounded with the floor function. This is equivalent to floor-divide-tensors(self, x).

Examples:

check:
  [tensor: 1].floor-divide([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 1, 3].floor-divide([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].floor-divide([tensor: 5, 1]).data-now() is-roughly [list: 0, 3]
  [tensor: 1, 3, 4].floor-divide([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
  [tensor: 1].floor-divide([tensor: 0]) raises "The argument Tensor cannot contain 0"
  [tensor: 4.23].floor-divide([tensor: 7.65, 1.43, 0]) raises "The argument Tensor cannot contain 0"
end

.max :: (x :: Tensor) -> Tensor

Returns the maximum of the Tensor and x. This is equivalent to tensor-max(self, x).

Examples:

check:
  [tensor: 0].max([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 1, 3].max([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].max([tensor: 200]).data-now() is-roughly [list: 200, 200]
  [tensor: 1, 3].max([tensor: 5, 1]).data-now() is-roughly [list: 5, 3]
  [tensor: 1, 3, 4].max([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

.min :: (x :: Tensor) -> Tensor

Returns the minimum of the Tensor and x. This is equivalent to tensor-min(self, x).

Examples:

check:
  [tensor: 0].min([tensor: 1]).data-now() is-roughly [list: 0]
  [tensor: 1, 3].min([tensor: 1]).data-now() is-roughly [list: 1, 1]
  [tensor: 1, 3].min([tensor: 200]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].min([tensor: 0]).data-now() is-roughly [list: 0, 0]
  [tensor: 1, 3, 4].min([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

.modulo :: (x :: Tensor) -> Tensor

Computes the modulo of the Tensor and x. This is equivalent to tensor-modulo(self, x).

Examples:

check:
  [tensor: 0].modulo([tensor: 1]).data-now() is-roughly [list: 0]
  [tensor: 1, 3].modulo([tensor: 1]).data-now() is-roughly [list: 0, 0]
  [tensor: 1, 3].modulo([tensor: 5, 1]).data-now() is-roughly [list: 1, 0]
  [tensor: 1, 3, 4].modulo([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
  [tensor: 1].modulo([tensor: 0]) raises "The argument Tensor cannot contain 0"
  [tensor: 1].modulo([tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
end

.expt :: (x :: Tensor) -> Tensor

Raises the Tensor to the power of x, element-wise. This is equivalent to tensor-expt(self, x).

Examples:

check:
  [tensor: 0].expt([tensor: 1]).data-now() is-roughly [list: 0]
  [tensor: 1, 3].expt([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].expt([tensor: 4]).data-now() is-roughly [list: 1, 81]
  [tensor: 3, 3].expt([tensor: 5, 1]).data-now() is-roughly [list: 243, 3]
  [tensor: 1, 3, 4].expt([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

.squared-difference :: (x :: Tensor) -> Tensor

Computes (self - x) * (self - x), element-wise. This is equivalent to squared-difference(self, x).

Examples:

check:
  [tensor: 0].squared-difference([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 3].squared-difference([tensor: -3]).data-now() is-roughly [list: 36]
  [tensor: 1, 3].squared-difference([tensor: 4]).data-now() is-roughly [list: 9, 1]
  [tensor: 3, 3].squared-difference([tensor: 5, 1]).data-now() is-roughly [list: 4, 4]
  [tensor: 1, 3, 4].squared-difference([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

3.28.2 Tensor Operations
3.28.2.1 Arithmetic Operations

All arithmetic operations are binary operations that accept two Tensors as arguments. If the size of any axis in either Tensor is greater than 1, the corresponding axis in the other Tensor must be either the same size or of size 1; otherwise, the operation raises an error.

Examples:

# Valid operations:
add-tensors([tensor: 1], [tensor: 1])
add-tensors([tensor: 1, 2, 3], [tensor: 1])
add-tensors([tensor: 1, 2, 3, 4].as-2d(2, 2), [tensor: 1])
add-tensors([tensor: 1, 2], [tensor: 1, 2, 3, 4].as-2d(2, 2))
add-tensors([tensor: 1, 2].as-2d(2, 1), [tensor: 1, 2].as-2d(1, 2))
add-tensors([tensor: 1, 2, 3, 4].as-2d(2, 2), [tensor: 1, 2].as-2d(2, 1))

# Invalid operations:
add-tensors([tensor: 1, 2, 3], [tensor: 1, 2])
add-tensors([tensor: 1, 2].as-2d(2, 1), [tensor: 1, 2, 3].as-2d(3, 1))

In some cases, this behavior isn’t intended, so most arithmetic operations have a "strict" counterpart that raises an error if the two input Tensors do not have the same shape.

add-tensors :: (a :: Tensor, b :: Tensor) -> Tensor

Adds two Tensors element-wise, A + B.

To assert that a and b are the same shape, use strict-add-tensors.

Examples:

check:
  add-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 2]
  add-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 2, 4]
  add-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 6, 4]
  add-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

subtract-tensors :: (a :: Tensor, b :: Tensor) -> Tensor

Subtracts two Tensors element-wise, A - B.

To assert that a and b are the same shape, use strict-subtract-tensors.

Examples:

check:
  subtract-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 0]
  subtract-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 0, 2]
  subtract-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: -4, 2]
  subtract-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

multiply-tensors :: (a :: Tensor, b :: Tensor) -> Tensor

Multiplies two Tensors element-wise, A * B.

To assert that a and b are the same shape, use strict-multiply-tensors.

Examples:

check:
  multiply-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 1]
  multiply-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 3]
  multiply-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 5, 3]
  multiply-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

divide-tensors :: (a :: Tensor, b :: Tensor) -> Tensor

Divides two Tensors element-wise, A / B.

To assert that a and b are the same shape, use strict-divide-tensors.

Examples:

check:
  divide-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 1]
  divide-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 3]
  divide-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 0.2, 3]
  divide-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
  divide-tensors([tensor: 1], [tensor: 0]) raises "The argument Tensor cannot contain 0"
  divide-tensors([tensor: 4.23], [tensor: 7.65, 1.43, 0, 2.31]) raises "The argument Tensor cannot contain 0"
end

floor-divide-tensors :: (a :: Tensor, b :: Tensor) -> Tensor

Divides two Tensors element-wise, A / B, with the result rounded with the floor function.

Examples:

check:
  floor-divide-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 1]
  floor-divide-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 3]
  floor-divide-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 0, 3]
  floor-divide-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
  floor-divide-tensors([tensor: 1], [tensor: 0]) raises "The argument Tensor cannot contain 0"
  floor-divide-tensors([tensor: 4.23], [tensor: 7.65, 1.43, 0]) raises "The argument Tensor cannot contain 0"
end

tensor-max :: (a :: Tensor, b :: Tensor) -> Tensor

Returns a Tensor containing the maximum of a and b, element-wise.

To assert that a and b are the same shape, use strict-tensor-max.

Examples:

check:
  tensor-max([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 1]
  tensor-max([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 3]
  tensor-max([tensor: 1, 3], [tensor: 200]).data-now() is-roughly [list: 200, 200]
  tensor-max([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 5, 3]
  tensor-max([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

tensor-min :: (a :: Tensor, b :: Tensor) -> Tensor

Returns a Tensor containing the minimum of a and b, element-wise.

To assert that a and b are the same shape, use strict-tensor-min.

Examples:

check:
  tensor-min([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 0]
  tensor-min([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 1]
  tensor-min([tensor: 1, 3], [tensor: 200]).data-now() is-roughly [list: 1, 3]
  tensor-min([tensor: 1, 3], [tensor: 0]).data-now() is-roughly [list: 0, 0]
  tensor-min([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 1, 1]
  tensor-min([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

tensor-modulo :: (a :: Tensor, b :: Tensor) -> Tensor

Computes the modulo of a and b, element-wise.

To assert that a and b are the same shape, use strict-tensor-modulo.

Examples:

check:
  tensor-modulo([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 0]
  tensor-modulo([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 0, 0]
  tensor-modulo([tensor: 1, 3], [tensor: 200]).data-now() is-roughly [list: 1, 3]
  tensor-modulo([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 1, 0]
  tensor-modulo([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

tensor-expt :: (base :: Tensor, exponent :: Tensor) -> Tensor

Computes the power of base to exponent, element-wise.

To assert that base and exponent are the same shape, use strict-tensor-expt.

Examples:

check:
  tensor-expt([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 0]
  tensor-expt([tensor: 3], [tensor: -3]).data-now() is-roughly [list: 0.03703703]
  tensor-expt([tensor: 1, 3], [tensor: 4]).data-now() is-roughly [list: 1, 81]
  tensor-expt([tensor: 3, 3], [tensor: 5, 1]).data-now() is-roughly [list: 243, 3]
  tensor-expt([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

squared-difference :: (a :: Tensor, b :: Tensor) -> Tensor

Computes (a - b) * (a - b), element-wise.

To assert that a and b are the same shape, use strict-squared-difference.

Examples:

check:
  squared-difference([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 1]
  squared-difference([tensor: 3], [tensor: -3]).data-now() is-roughly [list: 36]
  squared-difference([tensor: 1, 3], [tensor: 4]).data-now() is-roughly [list: 9, 1]
  squared-difference([tensor: 3, 3], [tensor: 5, 1]).data-now() is-roughly [list: 4, 4]
  squared-difference([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end

strict-add-tensors :: (a :: Tensor, b :: Tensor) -> Tensor

Same as add-tensors, but raises an error if a and b are not the same shape (as determined by .shape).

Examples:

check:
  strict-add-tensors([tensor: 1], [tensor: 0]) is-roughly add-tensors([tensor: 1], [tensor: 0])
  strict-add-tensors([tensor: -4, -1], [tensor: -8, -2]) is-roughly add-tensors([tensor: -4, -1], [tensor: -8, -2])
  strict-add-tensors([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-add-tensors([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end

strict-subtract-tensors :: (a :: Tensor, b :: Tensor) -> Tensor

Same as subtract-tensors, but raises an error if a and b are not the same shape (as determined by .shape).

Examples:

check:
  strict-subtract-tensors([tensor: 1], [tensor: 0]) is-roughly subtract-tensors([tensor: 1], [tensor: 0])
  strict-subtract-tensors([tensor: -4, -1], [tensor: -8, -2]) is-roughly subtract-tensors([tensor: -4, -1], [tensor: -8, -2])
  strict-subtract-tensors([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-subtract-tensors([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end

strict-multiply-tensors :: (a :: Tensor, b :: Tensor) -> Tensor

Same as multiply-tensors, but raises an error if a and b are not the same shape (as determined by .shape).

Examples:

check:
  strict-multiply-tensors([tensor: 1], [tensor: 0]) is-roughly multiply-tensors([tensor: 1], [tensor: 0])
  strict-multiply-tensors([tensor: -4, -1], [tensor: -8, -2]) is-roughly multiply-tensors([tensor: -4, -1], [tensor: -8, -2])
  strict-multiply-tensors([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-multiply-tensors([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end

strict-divide-tensors :: (a :: Tensor, b :: Tensor) -> Tensor

Same as divide-tensors, but raises an error if a and b are not the same shape (as determined by .shape).

Examples:

check:
  strict-divide-tensors([tensor: 1], [tensor: 1]) is-roughly divide-tensors([tensor: 1], [tensor: 1])
  strict-divide-tensors([tensor: -4, -1], [tensor: -8, -2]) is-roughly divide-tensors([tensor: -4, -1], [tensor: -8, -2])
  strict-divide-tensors([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-divide-tensors([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
  strict-divide-tensors([tensor: 1], [tensor: 0]) raises "The argument Tensor cannot contain 0"
  strict-divide-tensors([tensor: 1, 1], [tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
end

strict-tensor-max :: (a :: Tensor, b :: Tensor) -> Tensor

Same as tensor-max, but raises an error if a and b are not the same shape (as determined by .shape).

Examples:

check:
  strict-tensor-max([tensor: 1], [tensor: 0]) is-roughly tensor-max([tensor: 1], [tensor: 0])
  strict-tensor-max([tensor: -4, -1], [tensor: -8, -2]) is-roughly tensor-max([tensor: -4, -1], [tensor: -8, -2])
  strict-tensor-max([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-tensor-max([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end

strict-tensor-min :: (a :: Tensor, b :: Tensor) -> Tensor

Same as tensor-min, but raises an error if a and b are not the same shape (as determined by .shape).

Examples:

check:
  strict-tensor-min([tensor: 1], [tensor: 0]) is-roughly tensor-min([tensor: 1], [tensor: 0])
  strict-tensor-min([tensor: -4, -1], [tensor: -8, -2]) is-roughly tensor-min([tensor: -4, -1], [tensor: -8, -2])
  strict-tensor-min([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-tensor-min([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end

strict-tensor-expt :: (base :: Tensor, exponent :: Tensor) -> Tensor

Same as tensor-expt, but raises an error if base and exponent are not the same shape (as determined by .shape).

Examples:

check:
  strict-tensor-expt([tensor: 1], [tensor: 0]) is-roughly tensor-expt([tensor: 1], [tensor: 0])
  strict-tensor-expt([tensor: -4, -1], [tensor: -8, -2]) is-roughly tensor-expt([tensor: -4, -1], [tensor: -8, -2])
  strict-tensor-expt([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-tensor-expt([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end

strict-tensor-modulo :: (a :: Tensor, b :: Tensor) -> Tensor

Same as tensor-modulo, but raises an error if a and b are not the same shape (as determined by .shape).

Examples:

check:
  strict-tensor-modulo([tensor: 1], [tensor: 1]) is-roughly tensor-modulo([tensor: 1], [tensor: 1])
  strict-tensor-modulo([tensor: -4, -1], [tensor: -8, -2]) is-roughly tensor-modulo([tensor: -4, -1], [tensor: -8, -2])
  strict-tensor-modulo([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-tensor-modulo([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
  strict-tensor-modulo([tensor: 1], [tensor: 0]) raises "The argument Tensor cannot contain 0"
  strict-tensor-modulo([tensor: 1, 1], [tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
end

strict-squared-difference :: (a :: Tensor, b :: Tensor) -> Tensor

Same as squared-difference, but raises an error if a and b are not the same shape (as determined by .shape).

Examples:

check:
  strict-squared-difference([tensor: 1], [tensor: 0]) is-roughly squared-difference([tensor: 1], [tensor: 0])
  strict-squared-difference([tensor: -4, -1], [tensor: -8, -2]) is-roughly squared-difference([tensor: -4, -1], [tensor: -8, -2])
  strict-squared-difference([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-squared-difference([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end

3.28.2.2 Trigonometry Operations

tensor-acos :: (tensor :: Tensor) -> Tensor

Computes the inverse cosine of the Tensor, element-wise.

All of the values in the input Tensor must be between -1 and 1, inclusive; otherwise, the function raises an error.

Examples:

check:
  tensor-acos([tensor: 1]).data-now() is-roughly [list: 0]
  tensor-acos([tensor: 0]).data-now() is-roughly [list: ~1.5707963]
  tensor-acos([tensor: -1]).data-now() is-roughly [list: ~3.1415927]
  tensor-acos([tensor: 0.5, 0.2, 0.6]).data-now() is-roughly [list: ~1.0471975, ~1.3694384, ~0.9272952]
  tensor-acos([tensor: 10]) raises "Values in the input Tensor must be between -1 and 1, inclusive"
  tensor-acos([tensor: -1, 0, 16, -2]) raises "Values in the input Tensor must be between -1 and 1, inclusive"
end

tensor-acosh :: (tensor :: Tensor) -> Tensor

Computes the inverse hyperbolic cosine of the Tensor, element-wise.

All of the values in the input Tensor must be greater than or equal to 1; otherwise, the function raises an error.

Examples:

check:
  tensor-acosh([tensor: 1]).data-now() is-roughly [list: 0]
  tensor-acosh([tensor: 2]).data-now() is-roughly [list: ~1.3169579]
  tensor-acosh([tensor: 1, 5, 10, 200]).data-now() is-roughly [list: ~0, ~2.2924315, ~2.9932229, ~5.9914584]
  tensor-acosh([tensor: 0]) raises "Values in the input Tensor must be at least 1"
  tensor-acosh([tensor: 4, 1, 10, 32, -2, 82]) raises "Values in the input Tensor must be at least 1"
end

tensor-asin :: (tensor :: Tensor) -> Tensor

Computes the inverse sine of the Tensor, element-wise.

All of the values in the input Tensor must be between -1 and 1, inclusive; otherwise, the function raises an error.

Examples:

check:
  # Check one-dimensional usages:
  tensor-asin([tensor: 1]).data-now() is-roughly [list: ~1.5707963]
  tensor-asin([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-asin([tensor: -0.5]).data-now() is-roughly [list: ~-0.5235987]
  tensor-asin([tensor: 0.5, 0.2, 0.6]).data-now() is-roughly [list: ~0.5235987, ~0.2013579, ~0.6435011]

  # Check bounding values:
  tensor-asin([tensor: 9]) raises "Values in the input Tensor must be between -1 and 1, inclusive"
  tensor-asin([tensor: -1, -2, -3]) raises "Values in the input Tensor must be between -1 and 1, inclusive"
end

tensor-asinh :: (tensor :: Tensor) -> Tensor

Computes the inverse hyperbolic sine of the Tensor, element-wise.

Examples:

check:
  tensor-asinh([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-asinh([tensor: 1]).data-now() is-roughly [list: ~0.8813736]
  tensor-asinh([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-0.8813736, ~-1.4436353, ~-1.8184462]
  tensor-asinh([tensor: 21, 0, 32, 2]).data-now() is-roughly [list: ~3.7382359, ~0, ~4.1591272, ~1.4436354]
end

tensor-atan :: (tensor :: Tensor) -> Tensor

Computes the inverse tangent of the Tensor, element-wise.

Examples:

check:
  tensor-atan([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-atan([tensor: 1]).data-now() is-roughly [list: ~0.7853981]
  tensor-atan([tensor: -1]).data-now() is-roughly [list: ~-0.7853981]
  tensor-atan([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-0.7853981, ~-1.1071487, ~-1.2490458]
end

tensor-atan2 :: (a :: Tensor, b :: Tensor) -> Tensor

Computes the four-quadrant inverse tangent of a and b, element-wise.
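
No examples are recorded for this function in this section. As an illustrative sketch (the rough values below are hand-computed approximations of standard atan2 results, assuming a supplies the y-coordinates and b the x-coordinates):

check:
  tensor-atan2([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 0]
  tensor-atan2([tensor: 1], [tensor: 1]).data-now() is-roughly [list: ~0.7853981]
  tensor-atan2([tensor: 1], [tensor: 0]).data-now() is-roughly [list: ~1.5707963]
end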

tensor-atanh :: (tensor :: Tensor) -> Tensor

Computes the inverse hyperbolic tangent of the Tensor, element-wise.

All of the values in the input Tensor must be between -1 and 1, exclusive; otherwise, the function raises an error.

Examples:

check:
  # Check one-dimensional usages:
  tensor-atanh([tensor: 0.5]).data-now() is-roughly [list: ~0.5493061]
  tensor-atanh([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-atanh([tensor: -0.9]).data-now() is-roughly [list: ~-1.4722193]
  tensor-atanh([tensor: 0.5, 0.2, 0.6]).data-now() is-roughly [list: ~0.5493061, ~0.2027325, ~0.6931471]

  # Check bounding values:
  tensor-atanh([tensor: 1]) raises "Values in the input Tensor must be between -1 and 1, exclusive"
  tensor-atanh([tensor: -1]) raises "Values in the input Tensor must be between -1 and 1, exclusive"
  tensor-atanh([tensor: 0, 16, -1, 9, 1]) raises "Values in the input Tensor must be between -1 and 1, exclusive"
end

tensor-cos :: (tensor :: Tensor) -> Tensor

Computes the cosine of the Tensor, element-wise.

Examples:

check:
  tensor-cos([tensor: 0]).data-now() is-roughly [list: 1]
  tensor-cos([tensor: 1]).data-now() is-roughly [list: ~0.5403115]
  tensor-cos([tensor: -1]).data-now() is-roughly [list: ~0.5403116]
  tensor-cos([tensor: 6, 2, -4]).data-now() is-roughly [list: ~0.9601798, ~-0.4161523, ~-0.6536576]
end

tensor-cosh :: (tensor :: Tensor) -> Tensor

Computes the hyperbolic cosine of the Tensor, element-wise.

Examples:

check:
  tensor-cosh([tensor: 0]).data-now() is-roughly [list: 1]
  tensor-cosh([tensor: 1]).data-now() is-roughly [list: ~1.5430805]
  tensor-cosh([tensor: -1]).data-now() is-roughly [list: ~1.5430805]
  tensor-cosh([tensor: -1, -2, -3]).data-now() is-roughly [list: ~1.5430805, ~3.7621955, ~10.0676612]
end

tensor-sin :: (tensor :: Tensor) -> Tensor

Computes the sine of the Tensor, element-wise.

Examples:

check:
  tensor-sin([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-sin([tensor: 1]).data-now() is-roughly [list: ~0.8414709]
  tensor-sin([tensor: -1]).data-now() is-roughly [list: ~-0.8415220]
  tensor-sin([tensor: 6, 2, -4]).data-now() is-roughly [list: ~-0.2794162, ~0.9092976, ~0.7568427]
  tensor-sin([tensor: 21, 0, 32, 2]).data-now() is-roughly [list: ~0.8366656, ~0, ~0.5514304, ~0.9092976]
end

tensor-sinh :: (tensor :: Tensor) -> Tensor

Computes the hyperbolic sine of the Tensor, element-wise.

Examples:

check:
  tensor-sinh([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-sinh([tensor: 1]).data-now() is-roughly [list: ~1.1752011]
  tensor-sinh([tensor: -1]).data-now() is-roughly [list: ~-1.1752011]
  tensor-sinh([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-1.1752011, ~-3.6268603, ~-10.0178737]
  tensor-sinh([tensor: 6, 2, -4]).data-now() is-roughly [list: ~201.7131195, ~3.6268601, ~-27.2899169]
end

tensor-tan :: (tensor :: Tensor) -> Tensor

Computes the tangent of the Tensor, element-wise.

Examples:

check:
  tensor-tan([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-tan([tensor: 1]).data-now() is-roughly [list: ~1.5573809]
  tensor-tan([tensor: -1]).data-now() is-roughly [list: ~-1.5573809]
  tensor-tan([tensor: 21, 0, 32, 2]).data-now() is-roughly [list: ~-1.5275151, ~0, ~0.6610110, ~-2.1850113]
end

tensor-tanh :: (tensor :: Tensor) -> Tensor

Computes the hyperbolic tangent of the Tensor, element-wise.

Examples:

check:
  tensor-tanh([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-tanh([tensor: 1]).data-now() is-roughly [list: ~0.7615941]
  tensor-tanh([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-0.7615941, ~-0.9640275, ~-0.9950547]
  tensor-tanh([tensor: 6, 2, -4]).data-now() is-roughly [list: ~0.9999876, ~0.9640275, ~-0.9993293]
end

3.28.2.3 Math Operations

tensor-abs :: (tensor :: Tensor) -> Tensor

Computes the absolute value of the Tensor, element-wise.

Examples:

check:
  tensor-abs([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-abs([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-abs([tensor: -1]).data-now() is-roughly [list: 1]
  tensor-abs([tensor: -1, -2, -3]).data-now() is-roughly [list: 1, 2, 3]
  two-dim-abs = tensor-abs([tensor: -4, 5, -6, -7, -8, 9].as-2d(3, 2))
  two-dim-abs.shape() is [list: 3, 2]
  two-dim-abs.data-now() is-roughly [list: 4, 5, 6, 7, 8, 9]
end

tensor-ceil :: (tensor :: Tensor) -> Tensor

Computes the ceiling of the Tensor, element-wise.

Examples:

check:
  # Check usages on integer tensors:
  tensor-ceil([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-ceil([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-ceil([tensor: -1, -2, -3]).data-now() is-roughly [list: -1, -2, -3]

  # Check usages on float tensors:
  tensor-ceil([tensor: 0.3]).data-now() is-roughly [list: 1]
  tensor-ceil([tensor: 0.5]).data-now() is-roughly [list: 1]
  tensor-ceil([tensor: 0.8]).data-now() is-roughly [list: 1]
  tensor-ceil([tensor: -0.2]).data-now() is-roughly [list: 0]
  tensor-ceil([tensor: -0.5]).data-now() is-roughly [list: 0]
  tensor-ceil([tensor: -0.9]).data-now() is-roughly [list: 0]
  tensor-ceil([tensor: 3.5, 5.2, 1.6]).data-now() is-roughly [list: 4, 6, 2]
end

clip-by-value :: (
  tensor :: Tensor,
  min-value :: Number,
  max-value :: Number
) -> Tensor

Clips the values of the Tensor, element-wise, such that every element in the resulting Tensor is at least min-value and is at most max-value.

min-value must be less than or equal to max-value; otherwise, the function raises an error.

Examples:

check:
  clip-by-value([tensor: 0], 0, 0).data-now() is-roughly [list: 0]
  clip-by-value([tensor: 0], -1, 1).data-now() is-roughly [list: 0]
  clip-by-value([tensor: 0], 1, 4).data-now() is-roughly [list: 1]
  clip-by-value([tensor: 21, 0, 32, 2], 4, 9).data-now() is-roughly [list: 9, 4, 9, 4]
  clip-by-value([tensor: 3, 9, 10, 3.24], 4.5, 9.4).data-now() is-roughly [list: 4.5, 9, 9.4, 4.5]
  clip-by-value([tensor: 1], 10, 0) raises "minimum value to clip to must be less than or equal to the maximum"
  clip-by-value([tensor: 1], -10, -45) raises "minimum value to clip to must be less than or equal to the maximum"
end

exponential-linear-units :: (tensor :: Tensor) -> Tensor

Applies the exponential linear units function to the Tensor, element-wise.

elu :: (tensor :: Tensor) -> Tensor

Alias for exponential-linear-units.
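
As an illustrative sketch (hand-computed values, assuming the conventional ELU definition with alpha = 1, as in TensorFlow.js): positive inputs pass through unchanged, while a negative input x maps to e^x - 1.

check:
  exponential-linear-units([tensor: 0]).data-now() is-roughly [list: 0]
  exponential-linear-units([tensor: 2]).data-now() is-roughly [list: 2]
  exponential-linear-units([tensor: -1]).data-now() is-roughly [list: ~-0.6321205]
  elu([tensor: -1]).data-now() is-roughly [list: ~-0.6321205]
end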

gauss-error :: (tensor :: Tensor) -> Tensor

Applies the gauss error function to the Tensor, element-wise.

erf :: (tensor :: Tensor) -> Tensor

Alias for gauss-error.
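
As an illustrative sketch (the rough values are standard approximations of the error function, not recorded library output):

check:
  gauss-error([tensor: 0]).data-now() is-roughly [list: 0]
  gauss-error([tensor: 1]).data-now() is-roughly [list: ~0.8427007]
  erf([tensor: -1]).data-now() is-roughly [list: ~-0.8427007]
end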

tensor-exp :: (tensor :: Tensor) -> Tensor

Computes the equivalent of num-exp(tensor), element-wise.

tensor-exp-min1 :: (tensor :: Tensor) -> Tensor

Computes the equivalent of num-exp(tensor) - 1, element-wise.
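
As an illustrative sketch of both functions (hand-computed approximations of e^x and e^x - 1):

check:
  tensor-exp([tensor: 0]).data-now() is-roughly [list: 1]
  tensor-exp([tensor: 1]).data-now() is-roughly [list: ~2.7182818]
  tensor-exp-min1([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-exp-min1([tensor: 1]).data-now() is-roughly [list: ~1.7182818]
end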

tensor-floor :: (tensor :: Tensor) -> Tensor

Computes the floor of the Tensor, element-wise.

Examples:

check:
  # Check usages on integer tensors:
  tensor-floor([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-floor([tensor: -1]).data-now() is-roughly [list: -1]
  tensor-floor([tensor: -1, -2, -3]).data-now() is-roughly [list: -1, -2, -3]

  # Check usages on float tensors:
  tensor-floor([tensor: 0.3]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 0.5]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 0.8]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 0.999]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 1.1]).data-now() is-roughly [list: 1]
  tensor-floor([tensor: -0.2]).data-now() is-roughly [list: -1]
  tensor-floor([tensor: -0.5]).data-now() is-roughly [list: -1]
  tensor-floor([tensor: -0.9]).data-now() is-roughly [list: -1]
  tensor-floor([tensor: 3.5, 5.2, 1.6]).data-now() is-roughly [list: 3, 5, 1]
end

leaky-relu :: (tensor :: Tensor, alpha :: Number) -> Tensor

Applies a leaky rectified linear units function to the Tensor, element-wise.

alpha is the scaling factor for negative values. The default in TensorFlow.js is 0.2, but the argument has been exposed here for more flexibility.
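
As an illustrative sketch (hand-computed; a leaky ReLU leaves positive values unchanged and multiplies negative values by alpha):

check:
  leaky-relu([tensor: 3], 0.2).data-now() is-roughly [list: 3]
  leaky-relu([tensor: -5], 0.2).data-now() is-roughly [list: -1]
  leaky-relu([tensor: -1, 0, 1], 0.1).data-now() is-roughly [list: ~-0.1, 0, 1]
end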

tensor-log :: (tensor :: Tensor) -> Tensor

Computes the natural logarithm of the Tensor, element-wise; that is, it computes the equivalent of num-log(tensor).

tensor-log-plus1 :: (tensor :: Tensor) -> Tensor

Computes the natural logarithm of the Tensor plus 1, element-wise; that is, it computes the equivalent of num-log(tensor + 1).
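
As an illustrative sketch of both functions (hand-computed natural-logarithm approximations; for instance, the natural logarithm of 2 is roughly 0.6931471):

check:
  tensor-log([tensor: 1]).data-now() is-roughly [list: 0]
  tensor-log([tensor: 2]).data-now() is-roughly [list: ~0.6931471]
  tensor-log-plus1([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-log-plus1([tensor: 1]).data-now() is-roughly [list: ~0.6931471]
end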

log-sigmoid :: (tensor :: Tensor) -> Tensor

Applies the log sigmoid function to the Tensor, element-wise.
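
As an illustrative sketch (hand-computed approximations of log(sigmoid(x)); for instance, log-sigmoid of 0 is log(0.5)):

check:
  log-sigmoid([tensor: 0]).data-now() is-roughly [list: ~-0.6931471]
  log-sigmoid([tensor: 1]).data-now() is-roughly [list: ~-0.3132616]
end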

tensor-negate :: (tensor :: Tensor) -> Tensor

Multiplies each element in the Tensor by -1.

Examples:

check:
  tensor-negate([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-negate([tensor: 1]).data-now() is-roughly [list: -1]
  tensor-negate([tensor: -1]).data-now() is-roughly [list: 1]
  tensor-negate([tensor: -1, 2, 3, -4, 5]).data-now() is-roughly [list: 1, -2, -3, 4, -5]
  tensor-negate([tensor: -1, -2, -3, -4, -5]).data-now() is-roughly [list: 1, 2, 3, 4, 5]
end

parametric-relu :: (tensor :: Tensor, alpha :: Number) -> Tensor

Applies a leaky rectified linear units function to the Tensor, element-wise, using parametric alphas.

alpha is the scaling factor for negative values.
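
As an illustrative sketch (hand-computed, assuming the same element-wise scaling of negative values by alpha as with leaky-relu):

check:
  parametric-relu([tensor: 3], 0.1).data-now() is-roughly [list: 3]
  parametric-relu([tensor: -4], 0.1).data-now() is-roughly [list: ~-0.4]
end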

tensor-reciprocal :: (tensor :: Tensor) -> Tensor

Computes the reciprocal of the Tensor, element-wise; that is, it computes the equivalent of 1 / tensor.

In order to avoid division-by-zero errors, the input Tensor cannot contain 0; otherwise, the function raises an error.

Examples:

check:
  tensor-reciprocal([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-reciprocal([tensor: -1]).data-now() is-roughly [list: -1]
  tensor-reciprocal([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-1, ~-0.5, ~-0.3333333]

  # Check for division-by-zero errors:
  tensor-reciprocal([tensor: 0]) raises "The argument Tensor cannot contain 0"
  tensor-reciprocal([tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
  tensor-reciprocal([tensor: 7.65, 0, 1.43]) raises "The argument Tensor cannot contain 0"
end

relu :: (tensor :: Tensor) -> Tensor

Applies a rectified linear units function to the Tensor, element-wise.
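
As an illustrative sketch (hand-computed; ReLU maps each value x to the maximum of x and 0):

check:
  relu([tensor: 0]).data-now() is-roughly [list: 0]
  relu([tensor: 5]).data-now() is-roughly [list: 5]
  relu([tensor: -3]).data-now() is-roughly [list: 0]
  relu([tensor: -1, 2, -3, 4]).data-now() is-roughly [list: 0, 2, 0, 4]
end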

tensor-round :: (tensor :: Tensor) -> Tensor

Computes the equivalent of num-round(tensor), element-wise.

Due to unavoidable precision errors on Roughnums, the behavior for numbers ending in .5 is inconsistent. See the examples below.

Examples:

check:
  tensor-round([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-round([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-round([tensor: -1]).data-now() is-roughly [list: -1]
  tensor-round([tensor: 0.1]).data-now() is-roughly [list: 0]
  tensor-round([tensor: 0.3]).data-now() is-roughly [list: 0]
  tensor-round([tensor: 0.8]).data-now() is-roughly [list: 1]
  tensor-round([tensor: 0.999]).data-now() is-roughly [list: 1]
  tensor-round([tensor: -1, -2, -3]).data-now() is-roughly [list: -1, -2, -3]
  tensor-round([tensor: 3.5, 5.2, 1.6]).data-now() is-roughly [list: 4, 5, 2]

  # Note inconsistent behavior with rounding on Roughnums:
  tensor-round([tensor: 0.5]).data-now() is-roughly [list: 0] # rounds down
  tensor-round([tensor: 3.5]).data-now() is-roughly [list: 4] # rounds up
end

reciprocal-sqrt :: (tensor :: Tensor) -> Tensor

Computes the reciprocal of the square root of the Tensor, element-wise.

The resulting Tensor is roughly equivalent to tensor-reciprocal(tensor-sqrt(tensor)).

In order to avoid division-by-zero errors, the input Tensor cannot contain 0; otherwise, the function raises an error.

Examples:

check:
  reciprocal-sqrt([tensor: 1]).data-now() is-roughly [list: 1]
  reciprocal-sqrt([tensor: -1]).data-now() is-roughly [list: 1]
  reciprocal-sqrt([tensor: -1, -2, -3]).data-now() is-roughly [list: ~1, ~0.7071067, ~0.5773502]
  reciprocal-sqrt([tensor: 6, 2, -4]).data-now() is-roughly [list: ~0.4082482, ~0.7071067, ~0.5]

  # Check for division-by-zero errors:
  reciprocal-sqrt([tensor: 0]) raises "The argument Tensor cannot contain 0"
  reciprocal-sqrt([tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
  reciprocal-sqrt([tensor: 7.65, 0, 1.43]) raises "The argument Tensor cannot contain 0"
end

scaled-elu :: (tensor :: Tensor) -> Tensor

Applies a scaled, exponential linear units function to the Tensor, element-wise.

sigmoid :: (tensor :: Tensor) -> Tensor

Applies the sigmoid function to the Tensor, element-wise.
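For illustration, the sigmoid function computes 1 / (1 + e^(-x)), so every output falls strictly between 0 and 1. A sketch of the expected behavior, with values computed from that formula rather than taken from the library’s test suite:

check:
  sigmoid([tensor: 0]).data-now() is-roughly [list: 0.5]
  sigmoid([tensor: -10]).data-now() is-roughly [list: ~0.0000454]
  sigmoid([tensor: 10]).data-now() is-roughly [list: ~0.9999546]
end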

signed-ones :: (tensor :: Tensor) -> Tensor

Returns an element-wise indication of the sign of each number in the Tensor: every value in the original tensor is represented in the resulting tensor as ~+1 if the value is positive, ~-1 if the value is negative, or ~0 if the value is zero or not a number.

Examples:

check:
  signed-ones([tensor: 0]).data-now() is-roughly [list: 0]
  signed-ones([tensor: 1]).data-now() is-roughly [list: 1]
  signed-ones([tensor: 3]).data-now() is-roughly [list: 1]
  signed-ones([tensor: -1]).data-now() is-roughly [list: -1]
  signed-ones([tensor: -5]).data-now() is-roughly [list: -1]
  signed-ones([tensor: 9, -7, 5, -3, -1, 0]).data-now() is-roughly [list: 1, -1, 1, -1, -1, 0]
end

softplus :: (tensor :: Tensor) -> Tensor

Applies the softplus function to the Tensor, element-wise.

See https://sefiks.com/2017/08/11/softplus-as-a-neural-networks-activation-function/ for more information.
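For illustration, softplus computes ln(1 + e^x) element-wise, so softplus(0) is ln(2). A sketch of the expected behavior, with values computed from that formula rather than taken from the library’s test suite:

check:
  softplus([tensor: 0]).data-now() is-roughly [list: ~0.6931471]
  softplus([tensor: 1]).data-now() is-roughly [list: ~1.3132616]
end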

tensor-sqrt :: (tensor :: Tensor) -> Tensor

Computes the square root of the Tensor, element-wise.

All of the values in the input Tensor must be greater than or equal to 0; otherwise, the function raises an error.

Examples:

check:
  tensor-sqrt([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-sqrt([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-sqrt([tensor: 4]).data-now() is-roughly [list: 2]
  tensor-sqrt([tensor: 9]).data-now() is-roughly [list: 3]
  tensor-sqrt([tensor: 25]).data-now() is-roughly [list: 5]
  tensor-sqrt([tensor: -1]).data-now() raises "Values in the input Tensor must be at least 0"
  tensor-sqrt([tensor: 9, -7, 5, -3, -1, 0, 0.5]).data-now() raises "Values in the input Tensor must be at least 0"
end

tensor-square :: (tensor :: Tensor) -> Tensor

Computes the square of the Tensor, element-wise.

Examples:

check:
  tensor-square([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-square([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-square([tensor: 5]).data-now() is-roughly [list: 25]
  tensor-square([tensor: -1]).data-now() is-roughly [list: 1]
  tensor-square([tensor: -3]).data-now() is-roughly [list: 9]
  tensor-square([tensor: 9, -7, 5, -3, -1, 0, 0.5]).data-now() is-roughly [list: 81, 49, 25, 9, 1, 0, 0.25]
end

step :: (tensor :: Tensor) -> Tensor

Applies the unit step function to the Tensor, element-wise; that is, every value in the original tensor is represented in the resulting tensor as ~+1 if the value is positive; otherwise, it is represented as ~0.

Examples:

check:
  step([tensor: 0]).data-now() is-roughly [list: 0]
  step([tensor: 1]).data-now() is-roughly [list: 1]
  step([tensor: 5]).data-now() is-roughly [list: 1]
  step([tensor: -1]).data-now() is-roughly [list: 0]
  step([tensor: -3]).data-now() is-roughly [list: 0]
  step([tensor: -1, 4, 0, 0, 15, -43, 0]).data-now() is-roughly [list: 0, 1, 0, 0, 1, 0, 0]
end

3.28.2.4 Reduction Operations

arg-max :: (tensor :: Tensor, axis :: Option<Number>) -> Tensor

Returns a new Tensor where each element is the index of the maximum value along the outermost dimension of tensor.

arg-min :: (tensor :: Tensor, axis :: Option<Number>) -> Tensor

Returns a new Tensor where each element is the index of the minimum value along the outermost dimension of tensor.
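For illustration, a sketch of the expected behavior of both functions on a rank-1 Tensor, assuming a none axis reduces along the outermost dimension; the values are derived from the definitions above, not from the library’s test suite:

check:
  arg-max([tensor: 1, 5, 2], none).data-now() is-roughly [list: 1]
  arg-min([tensor: 1, 5, 2], none).data-now() is-roughly [list: 0]
end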

log-sum-exp :: (tensor :: Tensor, axis :: Option<Number>) -> Tensor

Computes log(sum(exp(elements along the outermost dimension))).

Reduces tensor along the outermost dimension.

reduce-all :: (tensor :: Tensor, axis :: Option<Number>) -> Tensor

Reduces the input Tensor across all dimensions by computing the logical "and" of its elements.

tensor must be of type "bool"; otherwise, the function raises an error.

reduce-any :: (tensor :: Tensor, axis :: Option<Number>) -> Tensor

Reduces the input Tensor across all dimensions by computing the logical "or" of its elements.

tensor must be of type "bool"; otherwise, the function raises an error.

reduce-max :: (tensor :: Tensor, axis :: Option<Number>) -> Tensor

Returns a Tensor containing a single value that is the maximum value of all entries in tensor.

reduce-min :: (tensor :: Tensor, axis :: Option<Number>) -> Tensor

Returns a Tensor containing a single value that is the minimum value of all entries in tensor.

reduce-mean :: (tensor :: Tensor, axis :: Option<Number>) -> Tensor

Returns a Tensor containing a single value that is the mean value of all entries in tensor.

reduce-sum :: (tensor :: Tensor, axis :: Option<Number>) -> Tensor

Returns a Tensor containing a single value that is the sum of all entries in tensor.
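For illustration, a sketch of the expected behavior of these reduction functions on a rank-1 Tensor; the values are derived from the definitions above, not from the library’s test suite:

check:
  reduce-sum([tensor: 1, 2, 3], none).data-now() is-roughly [list: 6]
  reduce-mean([tensor: 1, 2, 3], none).data-now() is-roughly [list: 2]
  reduce-max([tensor: 1, 2, 3], none).data-now() is-roughly [list: 3]
  reduce-min([tensor: 1, 2, 3], none).data-now() is-roughly [list: 1]
end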

3.28.2.5 Slicing and Joining Operations

concatenate :: (tensors :: List<Tensor>, axis :: NumInteger) -> Tensor

Concatenates each Tensor in tensors along the given axis.

The Tensors’ ranks and types must match, and their sizes must match in all dimensions except axis.

Examples:

check:
  concatenate([list: [tensor: 1], [tensor: 2]], 0).data-now() is-roughly [list: 1, 2]
  concatenate([list: [tensor: 1, 2, 3], [tensor: 4, 5, 6]], 0).data-now() is-roughly [list: 1, 2, 3, 4, 5, 6]

  two-dim-1 = [tensor: 1, 2, 3, 4].as-2d(2, 2)
  two-dim-2 = [tensor: 5, 6, 7, 8].as-2d(2, 2)
  concatenate([list: two-dim-1, two-dim-2], 0).data-now() is-roughly [list: 1, 2, 3, 4, 5, 6, 7, 8]
  concatenate([list: two-dim-1, two-dim-2], 1).data-now() is-roughly [list: 1, 2, 5, 6, 3, 4, 7, 8]
end

gather :: (
tensor :: Tensor,
indices :: Tensor,
axis :: Option<NumInteger>
)
-> Tensor

Gathers slices from the Tensor at every index in indices along the given axis. If axis is none, the operation gathers along the first dimension (axis 0), as in the examples below.

Examples:

check:
  input-1 = [tensor: 1, 2, 3, 4]
  indices-1 = [tensor: 1, 3, 3]
  gather(input-1, indices-1, none).data-now() is [list: 2, 4, 4]

  input-2 = [tensor: 1, 2, 3, 4].as-2d(2, 2)
  indices-2 = [tensor: 1, 1, 0]
  gather(input-2, indices-2, none).data-now() is [list: 3, 4, 3, 4, 1, 2]
end

reverse :: (tensor :: Tensor, axes :: Option<List<NumInteger>>) -> Tensor

Reverses the values in tensor along the specified axes.

If axes is none, the function defaults to reversing along all axes.

Examples:

check:
  reverse([tensor: 0], none).data-now() is-roughly [list: 0]
  reverse([tensor: 1, 2], none).data-now() is-roughly [list: 2, 1]
  reverse([tensor: 1, 2, 3, 4, 5], none).data-now() is-roughly [list: 5, 4, 3, 2, 1]

  two-dim = [tensor: 1, 2, 3, 4, 5, 6].as-2d(3, 2)
  reverse(two-dim, none).data-now() is-roughly [list: 6, 5, 4, 3, 2, 1]
  reverse(two-dim, some([list: 0])).data-now() is-roughly [list: 5, 6, 3, 4, 1, 2]
  reverse(two-dim, some([list: 1])).data-now() is-roughly [list: 2, 1, 4, 3, 6, 5]
end

slice :: (
tensor :: Tensor,
begin :: List<NumInteger>,
size :: Option<List<NumInteger>>
)
-> Tensor

Extracts a slice from tensor starting at the coordinates represented by begin. The resulting slice is of size size.

A value of -1 in size means that the resulting slice will extend to the end of the dimension along the respective axis.

If the length of size is less than the rank of tensor, the sizes of the remaining axes will be implicitly set to -1. If size is none, the sizes of all axes will be set to -1.

Examples:

check:
  slice([tensor: 1], [list: 0], none).data-now() is-roughly [list: 1]
  slice([tensor: 1, 2, 3, 4, 5], [list: 2], none).data-now() is-roughly [list: 3, 4, 5]

  two-dim = [tensor: 1, 2, 3, 4, 5, 6].as-2d(3, 2)
  slice(two-dim, [list: 2, 1], none).data-now() is-roughly [list: 6]
  slice(two-dim, [list: 1, 0], none).data-now() is-roughly [list: 3, 4, 5, 6]
  slice(two-dim, [list: 2], none) raises "number of coordinates to start the slice at must be equal to the rank"
  slice(two-dim, [list: 1, 0], some([list: 2, 1])).data-now() is-roughly [list: 3, 5]
  slice(two-dim, [list: 1, 0], some([list: 1, 2])).data-now() is-roughly [list: 3, 4]
  slice(two-dim, [list: 1, 0], some([list: 1])) raises "dimensions for the size of the slice at must be equal to the rank"
end

split :: (
tensor :: Tensor,
split-sizes :: List<NumInteger>,
axis :: NumInteger
)
-> List<Tensor>

Splits tensor into sub-Tensors along the specified axis.

split-sizes represents the sizes of each output Tensor along the axis. The sum of the sizes in split-sizes must be equal to tensor.shape().get(axis); otherwise, the function raises an error.

Examples:

check:
  one-dim = split([tensor: 1, 2, 3, 4], [list: 1, 1, 2], 0)
  one-dim.length() is 3
  one-dim.get(0).data-now() is-roughly [list: 1]
  one-dim.get(1).data-now() is-roughly [list: 2]
  one-dim.get(2).data-now() is-roughly [list: 3, 4]

  split([tensor: 1, 2, 3, 4], [list: 1], 0) raises "sum of split sizes must match the size of the dimension"
  split([tensor: 1, 2, 3, 4], [list: 1, 1, 1, 1, 1], 0) raises "sum of split sizes must match the size of the dimension"
end

stack :: (tensors :: List<Tensor>, axis :: NumInteger) -> Tensor

Stacks a list of rank-R Tensors into one rank-(R + 1) Tensor along the specified axis.

Every Tensor in tensors must have the same shape and data type; otherwise, the function raises an error.

An axis of 0 stacks the Tensors along the first dimension.

Examples:

check:
  stack([list: [tensor: 1]], 0).data-now() is-roughly [list: 1]
  stack([list: [tensor: 1], [tensor: 2]], 0).data-now() is-roughly [list: 1, 2]
  stack([list: [tensor: 1, 2], [tensor: 3, 4], [tensor: 5, 6]], 0).data-now() is-roughly [list: 1, 2, 3, 4, 5, 6]

  stack(empty, 0).data-now() raises "At least one Tensor must be supplied"
  stack([list: [tensor: 1]], 1) raises "Axis must be within the bounds of the Tensor"
  stack([list: [tensor: 1], [tensor: 2, 3], [tensor: 4]], 0) raises "All tensors passed to `stack` must have matching shapes"
end

tile :: (tensor :: Tensor, repetitions :: List<Number>) -> Tensor

Constructs a new Tensor by repeating tensor the number of times given by repetitions. Each number in repetitions represents the number of replications in each dimension; that is, the first element in the list represents the number of replications along the first dimension, and so on.
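For illustration, a sketch of the expected behavior under the standard tiling semantics; the values are derived from the description above, not from the library’s test suite:

check:
  tile([tensor: 1, 2], [list: 2]).data-now() is-roughly [list: 1, 2, 1, 2]
  tile([tensor: 1, 2, 3], [list: 3]).data-now() is-roughly [list: 1, 2, 3, 1, 2, 3, 1, 2, 3]
end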

unstack :: (tensor :: Tensor, axis :: NumInteger) -> List<Tensor>

Unstacks a Tensor of rank-R into a List of rank-(R - 1) Tensors along the specified axis.

An axis of 0 splits the Tensor along its first dimension.

Examples:

check:
  unstack([tensor: 1], 0).map({(x): x.data-now()}) is-roughly [list: [list: 1]]
  unstack([tensor: 1, 2], 0).map({(x): x.data-now()}) is-roughly [list: [list: 1], [list: 2]]
  unstack([tensor: 1, 2, 3, 4], 0).map({(x): x.data-now()}) is-roughly [list: [list: 1], [list: 2], [list: 3], [list: 4]]

  unstack([tensor: 1].as-scalar(), 0) raises "Tensor to be unstacked must be at least rank-1, but was rank-0"
  unstack([tensor: 1, 2, 3, 4], 1) raises "axis at which to unstack the Tensor must be within the bounds"
end

strided-slice :: (
tensor :: Tensor,
begin :: List<NumInteger>,
end :: List<NumInteger>,
strides :: List<Number>
)
-> Tensor

Extracts a strided slice of a Tensor.

Roughly speaking, this operation extracts a slice of size (end - begin) / stride from tensor. Starting at the location specified by begin, the slice continues by adding stride to the index until all dimensions are not less than end. Note that a stride can be negative, which causes a reverse slice.
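For illustration, a sketch of the expected behavior on a rank-1 Tensor; the values are derived from the slicing rule above, not from the library’s test suite:

check:
  strided-slice([tensor: 1, 2, 3, 4, 5], [list: 0], [list: 5], [list: 2]).data-now() is-roughly [list: 1, 3, 5]
  strided-slice([tensor: 1, 2, 3, 4, 5], [list: 1], [list: 4], [list: 1]).data-now() is-roughly [list: 2, 3, 4]
end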

3.28.3 TensorBuffers

TensorBuffers are mutable objects that allow users to set values at specific locations before converting the buffer into an immutable Tensor.

is-tensor-buffer :: (val :: Any) -> Boolean

Returns true if val is a TensorBuffer; otherwise, returns false.

Examples:

check:
  is-tensor-buffer(make-buffer([list: 1])) is true
  is-tensor-buffer(make-buffer([list: 8, 4, 10])) is true
  is-tensor-buffer(43) is false
  is-tensor-buffer("not a buffer") is false
  is-tensor-buffer({some: "thing"}) is false
end

3.28.3.1 TensorBuffer Constructors

make-buffer :: (shape :: List<NumInteger>) -> TensorBuffer

Creates a TensorBuffer with the specified shape. The returned TensorBuffer’s values are initialized to ~0.

Examples:

check:
  make-buffer([list: 1]).size() is 1
  make-buffer([list: 1]).shape() is [list: 1]
  make-buffer([list: 9, 5]).size() is 45
  make-buffer([list: 9, 5]).shape() is [list: 9, 5]

  # Check for error handling of rank-0 shapes:
  make-buffer(empty) raises "input shape List had zero elements"

  # Check for error handling of less than zero dimension sizes:
  make-buffer([list: 0]) raises "Cannot create TensorBuffer"
  make-buffer([list: -1]) raises "Cannot create TensorBuffer"
  make-buffer([list: 4, 5, 0, 3]) raises "Cannot create TensorBuffer"
  make-buffer([list: 2, -5, -1, 4]) raises "Cannot create TensorBuffer"
end

3.28.3.2 TensorBuffer Methods

.size :: () -> Number

Returns the size of the TensorBuffer (the number of values stored in the TensorBuffer).

Examples:

check:
  make-buffer([list: 1]).size() is 1
  make-buffer([list: 4]).size() is 4
  make-buffer([list: 3, 2]).size() is 6
  make-buffer([list: 4, 4]).size() is 16
  make-buffer([list: 4, 3, 5]).size() is 60
end

.shape :: () -> List<NumInteger>

Returns a List<NumInteger> representing the shape of the TensorBuffer. Each element in the List<NumInteger> corresponds to the size in each dimension.

Examples:

check:
  make-buffer([list: 1]).shape() is [list: 1]
  make-buffer([list: 4, 3]).shape() is [list: 4, 3]
  make-buffer([list: 2, 4, 1]).shape() is [list: 2, 4, 1]
  make-buffer([list: 4, 3, 5]).shape() is [list: 4, 3, 5]
end

.set-now :: (value :: Number, indices :: List<NumInteger>) -> Nothing

Sets the value in the TensorBuffer at the specified indices to value.

Examples:

check:
  test-buffer = make-buffer([list: 7])
  test-buffer.set-now(-45, [list: 0])
  test-buffer.set-now(9, [list: 2])
  test-buffer.set-now(0, [list: 4])
  test-buffer.set-now(-3.42, [list: 6])
  test-buffer.get-all-now() is-roughly [list: -45, 0, 9, 0, 0, 0, -3.42]
  test-buffer.to-tensor().shape() is [list: 7]
  test-buffer.to-tensor().data-now() is-roughly [list: -45, 0, 9, 0, 0, 0, -3.42]

  # Check out-of-bounds coordinates:
  test-buffer.set-now(10, [list: -1]) raises "Coordinates must be within the bounds of the TensorBuffer's shape"
  test-buffer.set-now(10, [list: 8]) raises "Coordinates must be within the bounds of the TensorBuffer's shape"

  # Check too few coordinates:
  test-buffer.set-now(10, [list:]) raises "number of supplied coordinates must match the rank"

  # Check too many coordinates:
  test-buffer.set-now(10, [list: 9, 5]) raises "number of supplied coordinates must match the rank"
end

.get-now :: (indices :: List<NumInteger>) -> Number

Returns the value in the TensorBuffer at the specified indices.

Examples:

check:
  test-buffer = make-buffer([list: 7])
  test-buffer.set-now(-45, [list: 0])
  test-buffer.set-now(9, [list: 2])
  test-buffer.set-now(0, [list: 4])
  test-buffer.set-now((4 / 3), [list: 5])
  test-buffer.set-now(-3.42, [list: 6])

  test-buffer.get-now([list: 0]) is-roughly -45
  test-buffer.get-now([list: 1]) is-roughly 0
  test-buffer.get-now([list: 2]) is-roughly 9
  test-buffer.get-now([list: 3]) is-roughly 0
  test-buffer.get-now([list: 4]) is-roughly 0
  test-buffer.get-now([list: 5]) is-roughly (4 / 3)
  test-buffer.get-now([list: 6]) is-roughly -3.42
end

.get-all-now :: () -> List<Roughnum>

Returns all values in the TensorBuffer.

Examples:

check:
  one-dim-buffer = make-buffer([list: 7])
  one-dim-buffer.set-now(-45, [list: 0])
  one-dim-buffer.set-now(9, [list: 2])
  one-dim-buffer.set-now(0, [list: 4])
  one-dim-buffer.set-now((4 / 3), [list: 5])
  one-dim-buffer.set-now(-3.42, [list: 6])
  one-dim-buffer.get-all-now() is-roughly [list: -45, 0, 9, 0, 0, (4 / 3), -3.42]

  two-dim-buffer = make-buffer([list: 2, 2])
  two-dim-buffer.set-now(4, [list: 0, 0])
  two-dim-buffer.set-now(3, [list: 0, 1])
  two-dim-buffer.set-now(2, [list: 1, 0])
  two-dim-buffer.set-now(1, [list: 1, 1])
  two-dim-buffer.get-all-now() is-roughly [list: 4, 3, 2, 1]
end

.to-tensor :: () -> Tensor

Creates an immutable Tensor from the TensorBuffer.

Examples:

check:
  one-dim-buffer = make-buffer([list: 7])
  one-dim-buffer.set-now(-45, [list: 0])
  one-dim-buffer.set-now(9, [list: 2])
  one-dim-buffer.set-now(0, [list: 4])
  one-dim-buffer.set-now(-3.42, [list: 6])
  one-dim-buffer.to-tensor().shape() is [list: 7]
  one-dim-buffer.to-tensor().data-now() is-roughly [list: -45, 0, 9, 0, 0, 0, -3.42]

  two-dim-buffer = make-buffer([list: 2, 2])
  two-dim-buffer.set-now(4, [list: 0, 0])
  two-dim-buffer.set-now(3, [list: 0, 1])
  two-dim-buffer.set-now(2, [list: 1, 0])
  two-dim-buffer.set-now(1, [list: 1, 1])
  two-dim-buffer.to-tensor().shape() is [list: 2, 2]
  two-dim-buffer.to-tensor().data-now() is-roughly [list: 4, 3, 2, 1]
end

3.28.4 Models

Models represent a collection of Layers, and define a series of inputs and outputs. They are one of the primary abstractions used in TensorFlow, and can be trained, evaluated, and used for prediction.

There are two types of models in TensorFlow: Sequential, where the outputs of one Layer are the inputs to the next Layer, and Model, which is more generic and supports arbitrary, non-cyclic graphs of Layers.

3.28.4.1 Generic Models

A Model is a data structure that consists of Layers and defines inputs and outputs. It is more generic than Sequential models as it supports arbitrary, non-cyclic graphs of Layers.

is-model :: (val :: Any) -> Boolean

Returns true if val is a Model; otherwise, returns false.

make-model :: (config :: Object) -> Model

Creates a new generic Model.

3.28.4.2 Sequential Models

A Sequential model is a model where the outputs of one Layer are the inputs to the next Layer. That is, the model topology is a simple "stack" of layers, with no branching or skipping.

As a result, the first Layer passed to a Sequential model must have a defined input shape. This means that the LayerConfig used to instantiate the first Layer must have a defined input-shape or batch-input-shape parameter.

is-sequential :: (val :: Any) -> Boolean

Returns true if val is a Sequential; otherwise, returns false.

make-sequential :: (config :: Object) -> Sequential

Creates a new Sequential model.

.add :: (layer :: Layer) -> Nothing

Adds a Layer on top of the Sequential’s stack.

.compile :: (config :: Object) -> Nothing

Configures and prepares the Sequential model for training and evaluation.

Compiling outfits the Sequential with an optimizer, loss, and/or metrics. Calling .fit or .evaluate on an un-compiled model raises an error.

.evaluate :: (
x :: Tensor,
y :: Tensor,
config :: Object
)
-> Tensor

Returns the loss value and metrics values for the model in test mode.

Loss and metrics parameters should be specified in a call to .compile before calling this method.

.predict :: (x :: Tensor, config :: Object) -> Tensor

Generates output predictions for the input samples.

Computation is done in batches.

.predict-on-batch :: (x :: Tensor) -> Tensor

Returns predictions for a single batch of samples.

.fit :: (
x :: Tensor,
y :: Tensor,
config :: Object,
epoch-callback :: (Number, Object -> Nothing)
)
-> Nothing

Trains the model for a fixed number of epochs (iterations on a dataset).
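As a rough illustration of how these methods fit together, the following hypothetical sketch trains a tiny model on points from the line y = (2 * x) - 1. The LayerConfig and config keys shown (units, input-shape, optimizer, loss, epochs) are assumptions modeled on the underlying TensorFlow.js API, not confirmed names:

model = make-sequential({})
model.add(dense-layer({units: 1, input-shape: [list: 1]}))
model.compile({optimizer: train-sgd(0.1), loss: "meanSquaredError"})
xs = [tensor: 1, 2, 3, 4]
ys = [tensor: 1, 3, 5, 7]
# the callback receives the epoch number and a logs object after each epoch:
model.fit(xs, ys, {epochs: 50}, lam(epoch, logs): nothing end)
model.predict([tensor: 5], {}) # should approximate [tensor: 9]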

3.28.5 SymbolicTensors

SymbolicTensors are placeholders for Tensors without any concrete value.

They are most often encountered when building a graph of Layers for a Model that takes in some kind of unknown input.

is-symbolic-tensor :: (val :: Any) -> Boolean

Returns true if val is a SymbolicTensor; otherwise, returns false.

3.28.5.1 SymbolicTensor Constructors

make-input :: (shape :: List<Option<NumInteger>>) -> SymbolicTensor

Creates a new SymbolicTensor with the input shape, not including the batch size.

none values in the input List represent dimensions of arbitrary length.

make-batch-input :: (batch-shape :: List<Option<NumInteger>>) -> SymbolicTensor

Creates a new SymbolicTensor with the input shape, where the first element in the input List is the batch size.

none values in the input List represent dimensions of arbitrary length.

3.28.5.2 SymbolicTensor Methods

.shape :: () -> List<Option<NumInteger>>

Returns the shape of the SymbolicTensor. none values in the output List represent dimensions of arbitrary length.
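For illustration, a hedged sketch of reading shapes back from the constructors above; it assumes, following the underlying TensorFlow.js behavior, that make-input prepends a batch dimension of arbitrary length while make-batch-input does not:

check:
  make-input([list: some(3)]).shape() is [list: none, some(3)]
  make-batch-input([list: some(32), some(3)]).shape() is [list: some(32), some(3)]
end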

3.28.6 Layers

Layers are the primary building block for constructing a Model. Each Layer will typically perform some computation to transform its input to its output.

Layers will automatically take care of creating and initializing the various internal variables/weights they need to function.

is-layer :: (val :: Any) -> Boolean

Returns true if val is a Layer; otherwise, returns false.

3.28.6.1 Layer-Specific Datatypes

LayerConfigs are used to construct Layers.

A LayerConfig is an Object that describes the properties of a Layer.

Every Layer can allow for different options in the LayerConfig used to construct them. Those options are specified underneath each Layer constructor. Additionally, the following options are permitted in every LayerConfig:

All options allowed in a given Layer’s LayerConfig are optional unless otherwise stated.

Activation

A String that specifies a TensorFlow activation function. The following strings are options:

Initializer

A String that specifies a TensorFlow initialization method. The following strings are options:

Constraint

A String that specifies a TensorFlow constraint function. The following strings are options:

Regularizer

A String that specifies a TensorFlow regularizer function. The following strings are options:

DataFormat

A String that specifies a TensorFlow tensor data format. The following strings are options:

PaddingMethod

A String that specifies a TensorFlow padding method. The following strings are options:

3.28.6.2 Basic Layers

activation-layer :: (config :: LayerConfig) -> Layer

Applies an element-wise activation function to an output.

Other layers, most notably dense-layers, can also apply activation functions. This Layer can be used to extract the values before and after the activation.

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

dense-layer :: (config :: LayerConfig) -> Layer

Creates a dense (fully-connected) Layer.

This Layer implements the operation output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the Layer, and bias is a bias vector created by the layer if the use-bias option is set to true.

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

dropout-layer :: (config :: LayerConfig) -> Layer

Applies dropout to the input.

Dropout consists of randomly setting a fraction rate of input units to 0 at each update during training time, which helps prevent overfitting. See http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf for more information.

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

embedding-layer :: (config :: LayerConfig) -> Layer

Maps positive integers (indices) into dense vectors of fixed size.

The input shape of this layer is a two-dimensional Tensor with shape [list: batch-size, sequence-length].

The output shape of this layer is a three-dimensional Tensor with shape [list: batch-size, sequence-length, output-dim].

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

flatten-layer :: (config :: LayerConfig) -> Layer

Flattens the input. Does not affect the batch size.

A flatten-layer flattens each batch in its inputs to one dimension (making the output two dimensional).

The config passed to this constructor does not support any additional options other than the default LayerConfig options.

repeat-vector-layer :: (config :: LayerConfig) -> Layer

Repeats the input num-repeats times in a new dimension.

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

reshape-layer :: (config :: LayerConfig) -> Layer

Reshapes an input to a certain shape.

The input shape can be arbitrary, although all dimensions in the input shape must be fixed.

The output shape is [list: batch-size, target-shape.get(0), ..., target-shape.get(i)].

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

3.28.6.3 Convolutional Layers

conv-1d-layer :: (config :: LayerConfig) -> Layer

A one-dimensional convolution Layer.

This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a Tensor of outputs.

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

conv-2d-layer :: (config :: LayerConfig) -> Layer

A two-dimensional convolution Layer.

This layer creates a convolution kernel that is convolved with the layer input to produce a Tensor of outputs.

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

conv-2d-transpose-layer :: (config :: LayerConfig) -> Layer

Transposed convolutional Layer. This is sometimes known as a "deconvolution" layer.

The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution; for example, from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

cropping-2d-layer :: (config :: LayerConfig) -> Layer

Crops a two-dimensional input at the top, bottom, left, and right side (for example, image data).

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

depthwise-conv-2d-layer :: (config :: LayerConfig) -> Layer

Depthwise separable two-dimensional convolution.

A depthwise separable convolution consists of performing just the first step in a depthwise spatial convolution (which acts on each input channel separately). The depth-multiplier argument controls how many output channels are generated per input channel in the depthwise step.

In addition to the default LayerConfig options, the config passed to this constructor can also contain:

3.28.6.4 Merge Layers

add-layer :: (config :: LayerConfig) -> Layer

average-layer :: (config :: LayerConfig) -> Layer

maximum-layer :: (config :: LayerConfig) -> Layer

minimum-layer :: (config :: LayerConfig) -> Layer

multiply-layer :: (config :: LayerConfig) -> Layer

3.28.6.5 Normalization Layers

3.28.6.6 Pooling Layers

3.28.6.7 Recurrent Layers

gru-layer :: (config :: LayerConfig) -> Layer

gru-cell-layer :: (config :: LayerConfig) -> Layer

lstm-layer :: (config :: LayerConfig) -> Layer

rnn-layer :: (config :: LayerConfig) -> Layer

3.28.6.8 Wrapper Layers

3.28.7 Optimizers

Optimizers are used to perform training operations and compute gradients.

Optimizers eagerly compute gradients. This means that when a user provides a function that is a combination of TensorFlow operations to an Optimizer, the Optimizer automatically differentiates that function’s output with respect to its inputs.

is-optimizer :: (val :: Any) -> Boolean

Returns true if val is an Optimizer; otherwise, returns false.

3.28.7.1 Optimizer Constructors

There are many different types of Optimizers that use different formulas to compute gradients.

train-sgd :: (learning-rate :: Number) -> Optimizer

Constructs an Optimizer that uses a stochastic gradient descent algorithm, where learning-rate is the learning rate to use for the algorithm.

train-momentum :: (learning-rate :: Number, momentum :: Number) -> Optimizer

Constructs an Optimizer that uses a momentum gradient descent algorithm, where learning-rate is the learning rate to use for the algorithm and momentum is the momentum to use for the algorithm.

See http://proceedings.mlr.press/v28/sutskever13.pdf.

train-adagrad :: (learning-rate :: Number, initial-accumulator :: Option<NumPositive>) -> Optimizer

Constructs an Optimizer that uses the Adagrad algorithm, where learning-rate is the learning rate to use for the Adagrad gradient descent algorithm.

If not none, initial-accumulator is the positive, starting value for the accumulators in the Adagrad algorithm. If initial-accumulator is specified but is not positive, the function raises an error.

See http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf or http://ruder.io/optimizing-gradient-descent/index.html#adagrad.

train-adadelta :: (
learning-rate :: Option<Number>,
rho :: Option<Number>,
epsilon :: Option<Number>
)
-> Optimizer

Constructs an Optimizer that uses the Adadelta algorithm.

If not none, learning-rate is the learning rate to use for the Adadelta gradient descent algorithm, rho is the learning rate decay over each update, and epsilon is a constant used to better condition the gradient updates.

See https://arxiv.org/abs/1212.5701.

train-adam :: (
learning-rate :: Option<Number>,
beta-1 :: Option<Number>,
beta-2 :: Option<Number>,
epsilon :: Option<Number>
)
-> Optimizer

Constructs an Optimizer that uses the Adam algorithm.

If not none, learning-rate is the learning rate to use for the Adam gradient descent algorithm, beta-1 is the exponential decay rate for the first moment estimates, beta-2 is the exponential decay rate for the second moment estimates, and epsilon is a small constant for numerical stability.

See https://arxiv.org/abs/1412.6980.

train-adamax :: (
learning-rate :: Option<Number>,
beta-1 :: Option<Number>,
beta-2 :: Option<Number>,
epsilon :: Option<Number>,
decay :: Option<Number>
)
-> Optimizer

Constructs an Optimizer that uses the Adamax algorithm.

If not none, learning-rate is the learning rate to use for the Adamax gradient descent algorithm, beta-1 is the exponential decay rate for the first moment estimates, beta-2 is the exponential decay rate for the second moment estimates, epsilon is a small constant for numerical stability, and decay is the learning rate decay over each update.

See https://arxiv.org/abs/1412.6980.

train-rmsprop :: (
learning-rate :: Number,
decay :: Option<Number>,
momentum :: Option<Number>,
epsilon :: Option<Number>,
is-centered :: Boolean
)
-> Optimizer

Constructs an Optimizer that uses RMSProp gradient descent, where learning-rate is the learning rate to use for the RMSProp gradient descent algorithm.

If not none, decay represents the discounting factor for the history/coming gradient, momentum represents the momentum to use for the RMSProp gradient descent algorithm, and epsilon is a small value to avoid division-by-zero errors.

If is-centered is true, gradients are normalized by the estimated variance of the gradient.

See these slides from the University of Toronto for a primer on RMSProp.

Note: This TensorFlow.js implementation uses plain momentum and is not the "centered" version of RMSProp.

3.28.7.2 Optimizer Methods

.minimize :: (f :: ( -> Tensor), variables :: List<Tensor>) -> Tensor

Executes f and minimizes the scalar output of f by computing gradients of that output with respect to the list of trainable, variable Tensors provided by variables.

f must be a thunk that returns a scalar Tensor. The method then returns the scalar Tensor produced by f.

If variables is empty, the Optimizer will default to training all trainable variables that have been instantiated.
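For illustration, a minimal sketch of a single training step, assuming each call to .minimize performs one update as in the underlying TensorFlow.js optimizer; repeated calls should nudge x toward 0, the minimizer of x * x:

x = make-variable([tensor: 5])
sgd = train-sgd(0.1)
# the thunk computes the scalar loss x^2; .minimize differentiates it
# with respect to x, applies one gradient-descent update, and returns the loss:
loss = sgd.minimize(lam(): reduce-sum(tensor-square(x), none) end, [list: x])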