3.28 tensorflow
3.28.1 Tensors
For example, a tensor could be a one-dimensional matrix (a vector), a three-dimensional matrix (a cube), a zero-dimensional matrix (a single number), or a higher dimensional structure that is more difficult to visualize.
For performance reasons, Tensors do not support arbitrary precision: TensorFlow.js (the library on which this tensorflow library is built) stores Tensor values in JavaScript Float32Arrays. Retrieving values from a Tensor using .data-now() therefore always returns a List<Roughnum>.
Since Tensors are immutable, all operations always return new Tensors and never modify the input Tensors. The exception to this is when a Tensor is transformed into a mutable Tensor using the make-variable function or the .to-variable method. These "variable tensors" can be modified by Optimizers.
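For example, even a Tensor built from an exact rational hands back Roughnum approximations (an illustrative sketch; the exact Float32 rounding in the last digit may vary):

check:
  # 1/3 cannot be represented exactly in a Float32Array, so .data-now()
  # returns a Roughnum approximation of it:
  [tensor: 1/3].data-now() is-roughly [list: ~0.3333333]
end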
3.28.1.1 Tensor Constructors
Creates a new Tensor with the given values.
Every Tensor created with this constructor is one-dimensional. Use .as-1d, .as-2d, .as-3d, .as-4d, or .reshape to change the shape of a Tensor after instantiating it.
[tensor: 1, 2, 3] # a size-3 tensor
[tensor: 1.4, 5.2, 0.4, 12.4, 14.3, 6].as-2d(3, 2) # a 3 x 2 tensor
[tensor: 9, 4, 0, -32, 23, 1, 3, 2].as-3d(2, 2, 2) # a 2 x 2 x 2 tensor
Returns true if val is a Tensor; otherwise, returns false.
check:
  is-tensor([tensor: 1, 2, 3]) is true
  is-tensor(true) is false
  is-tensor(0) is false
  is-tensor([list: 1, 2, 3]) is false
end
Creates a new Tensor with the values in the input List.
Similar to the tensor constructor, all Tensors created using list-to-tensor are one-dimensional by default. Use .as-1d, .as-2d, .as-3d, .as-4d, or .reshape to change the shape of a Tensor after instantiating it.
check:
  list-to-tensor(empty) satisfies is-tensor
  list-to-tensor([list: 5, 3, 4, 7]) satisfies is-tensor
  list-to-tensor(empty).data-now() is empty
  list-to-tensor([list: 9, 3, 2, 3]).data-now() is-roughly [list: 9, 3, 2, 3]
  list-to-tensor([list: 3, 2, 1, 0, 4, 9]).as-2d(2, 3).shape() is [list: 2, 3]
end
Creates a new Tensor of rank-0 with the given value.
The same functionality can be achieved with the tensor constructor and the .as-scalar method, but it’s recommended to use make-scalar as it makes the code more readable.
check:
  make-scalar(1).size() is 1
  make-scalar(~12.3).shape() is empty
  make-scalar(2.34).data-now() is-roughly [list: 2.34]
end
Creates a Tensor with the input shape where all of the entries are value.
check:
  fill([list: 0], 1).data-now() is-roughly [list: ]
  fill([list: 3], 5).data-now() is-roughly [list: 5, 5, 5]
  fill([list: 3, 2], -3).data-now() is-roughly [list: -3, -3, -3, -3, -3, -3]
end
Returns a Tensor whose values are an evenly spaced sequence of numbers over the range [start, stop]. num-values is the number of entries in the output Tensor.
check:
  linspace(0, 3, 1).data-now() is-roughly [list: 0]
  linspace(10, 11, 1).data-now() is-roughly [list: 10]
  linspace(5, 1, 5).data-now() is-roughly [list: 5, 4, 3, 2, 1]
  linspace(0, 9, 10).data-now() is-roughly [list: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
  linspace(0, 4, 9).data-now() is-roughly [list: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4]
end
Returns a Tensor with the given shape where all of the entries are ones.
check:
  ones([list: 0]).data-now() is-roughly [list: ]
  ones([list: 4]).data-now() is-roughly [list: 1, 1, 1, 1]

  two-dim = ones([list: 3, 2])
  two-dim.shape() is [list: 3, 2]
  two-dim.data-now() is-roughly [list: 1, 1, 1, 1, 1, 1]
end
Returns a Tensor with the given shape where all of the entries are zeros.
check:
  zeros([list: 0]).data-now() is-roughly [list: ]
  zeros([list: 4]).data-now() is-roughly [list: 0, 0, 0, 0]

  two-dim = zeros([list: 3, 2])
  two-dim.shape() is [list: 3, 2]
  two-dim.data-now() is-roughly [list: 0, 0, 0, 0, 0, 0]
end
- multinomial :: (
- logits :: Tensor,
- num-samples :: NumPositive,
- seed :: Option<Number>,
- is-normalized :: Boolean
- )
- -> Tensor
Creates a new Tensor where all of the values are sampled from a multinomial distribution.
logits should be a one-dimensional Tensor containing unnormalized log-probabilities, or a two-dimensional Tensor of shape [batch-size, num-outcomes].
num-samples is the number of samples to draw for each row slice. seed is the random seed to use when generating values; if none, the seed is randomly generated. is-normalized designates whether the provided logits are normalized, true probabilities (that is, whether they sum to 1).
check:
  three-dim = [tensor: 1, 1, 1, 1, 1, 1, 1, 1].as-3d(2, 2, 2)
  multinomial(three-dim, 2, none, false) raises "must be a one-dimensional or two-dimensional Tensor"
  multinomial([tensor: ], 1, none, false) raises "must have at least two possible outcomes"
  multinomial([tensor: 0.8], 7, none, false) raises "must have at least two possible outcomes"

  multinomial([tensor: 1.0, 0.0], 1, none, true).shape() is [list: 1]
  multinomial([tensor: 1.0, 0.0], 3, none, true).shape() is [list: 3]
  multinomial([tensor: 0.3, 0.5, 0.7], 10, none, false).shape() is [list: 10]
end
- random-normal :: (
- shape :: List<NumInteger>,
- mean :: Option<Number>,
- standard-deviation :: Option<Number>
- )
- -> Tensor
Creates a new Tensor with the given shape (represented as values in the input List<NumInteger> shape) where all of the values are sampled from a normal distribution.
mean is the mean of the normal distribution and standard-deviation is the standard deviation of the normal distribution. If none, the respective parameters are set to the TensorFlow.js defaults.
check:
  random-normal(empty, none, none).size() is 1
  random-normal(empty, none, none).shape() is empty
  random-normal([list: 4, 3], none, none).shape() is [list: 4, 3]
  random-normal([list: 2, 5, 3], none, none).shape() is [list: 2, 5, 3]
end
- random-uniform :: (
- shape :: List<NumInteger>,
- min-val :: Option<Number>,
- max-val :: Option<Number>
- )
- -> Tensor
Creates a new Tensor with the given shape (represented as values in the input List) where all of the values are sampled from a uniform distribution.
min-val is the lower bound on the range of random values to generate and max-val is the upper bound on the range of random values to generate. If none, the respective parameters are set to the TensorFlow.js defaults.
check:
  random-uniform(empty, none, none).size() is 1
  random-uniform(empty, none, none).shape() is empty
  random-uniform([list: 1, 3], none, none).shape() is [list: 1, 3]
  random-uniform([list: 5, 4, 8], none, none).shape() is [list: 5, 4, 8]

  lower-bound = 1
  upper-bound = 10
  random-data = random-uniform([list: 20], some(lower-bound), some(upper-bound))
  for each(data-point from random-data.data-now()):
    data-point satisfies lam(x): (x >= lower-bound) and (x <= upper-bound) end
  end
end
Creates a new, mutable Tensor initialized to the values of the input Tensor.
The same functionality can be achieved with the .to-variable method.
check:
  make-variable([tensor: ]).data-now() is-roughly empty
  make-variable([tensor: 1]).data-now() is-roughly [list: 1]

  # We can perform normal Tensor operations on mutable Tensors:
  two-dim = [tensor: 4, 5, 3, 9].as-2d(2, 2)
  make-variable(two-dim).size() is 4
  make-variable(two-dim).shape() is [list: 2, 2]
  make-variable(two-dim).data-now() is-roughly [list: 4, 5, 3, 9]
  make-variable(two-dim).as-3d(4, 1, 1).shape() is [list: 4, 1, 1]
end
3.28.1.2 Tensor Methods
Returns the size of the Tensor (the number of values stored in the Tensor).
check:
  make-scalar(4.21).size() is 1
  [tensor: 6.32].size() is 1
  [tensor: 1, 2, 3].size() is 3
  [tensor: 1.4, 5.2, 0.4, 12.4, 14.3, 6].as-2d(3, 2).size() is 6
end
Returns a List<NumInteger> representing the shape of the Tensor. Each element in the List<NumInteger> corresponds to the size in each dimension.
check:
  make-scalar(3).shape() is empty
  [tensor: 9].shape() is [list: 1]
  [tensor: 8, 3, 1].shape() is [list: 3]
  [tensor: 0, 0, 0, 0, 0, 0].as-2d(3, 2).shape() is [list: 3, 2]
end
Constructs a new, one-dimensional Tensor from the values of the original Tensor.
check:
  a = [tensor: 1, 2, 3, 4, 5, 6].as-2d(3, 2)
  a.shape() is [list: 3, 2]
  a.flatten().shape() is [list: 6]

  b = make-scalar(12)
  b.shape() is empty
  b.flatten().shape() is [list: 1]
end
Constructs a new, zero-dimensional Tensor from the values of the original, size-1 Tensor.
Raises an error if the calling Tensor is not size-1.
check:
  size-one = [tensor: 1]
  size-one.as-scalar().shape() is empty
  size-one.shape() is [list: 1] # doesn't modify shape of original tensor

  size-two = [tensor: 1, 2]
  size-two.as-scalar() raises "Tensor was size-2 but `as-scalar` requires the tensor to be size-1"
end
Constructs a new, rank-1 Tensor from the values of the original Tensor.
The same functionality can be achieved with .reshape, but it’s recommended to use .as-1d as it makes the code more readable.
check:
  one-dim = [tensor: 1]
  two-dim = [tensor: 4, 3, 2, 1].as-2d(2, 2)
  three-dim = [tensor: 0, 1, 2, 3, 4, 5, 6, 7, 8].as-3d(3, 1, 3)

  one-dim.shape() is [list: 1]
  one-dim.as-1d().shape() is [list: 1]
  two-dim.shape() is [list: 2, 2]
  two-dim.as-1d().shape() is [list: 4]
  three-dim.shape() is [list: 3, 1, 3]
  three-dim.as-1d().shape() is [list: 9]
end
Constructs a new, rank-2 Tensor with the input dimensions from the values of the original Tensor.
The number of elements implied by the input dimensions must be the same as the number of elements in the calling Tensor. Otherwise, the method raises an error.
The same functionality can be achieved with .reshape, but it’s recommended to use .as-2d as it makes the code more readable.
check:
  one-dim = [tensor: 1]
  two-dim = [tensor: 0, 1, 2, 3, 4, 5].as-2d(3, 2)
  three-dim = [tensor: 4, 3, 2, 1, 0, -1, -2, -3].as-3d(2, 2, 2)

  one-dim.shape() is [list: 1]
  one-dim.as-2d(1, 1).shape() is [list: 1, 1]
  two-dim.shape() is [list: 3, 2]
  two-dim.as-2d(2, 3).shape() is [list: 2, 3]
  three-dim.shape() is [list: 2, 2, 2]
  three-dim.as-2d(4, 2).shape() is [list: 4, 2]

  one-dim.as-2d(2, 1) raises "Cannot reshape"
  two-dim.as-2d(3, 3) raises "Cannot reshape"
  three-dim.as-2d(5, 4) raises "Cannot reshape"
end
- .as-3d :: (
- rows :: NumInteger,
- columns :: NumInteger,
- depth :: NumInteger
- )
- -> Tensor
Constructs a new, rank-3 Tensor with the input dimensions from the values of the original Tensor.
The number of elements implied by the input dimensions must be the same as the number of elements in the calling Tensor. Otherwise, the method raises an error.
The same functionality can be achieved with .reshape, but it’s recommended to use .as-3d as it makes the code more readable.
check:
  one-dim = [tensor: 1]
  two-dim = [tensor: 0, 1, 2, 3, 4, 5, 6, 7].as-2d(4, 2)

  one-dim.shape() is [list: 1]
  one-dim.as-3d(1, 1, 1).shape() is [list: 1, 1, 1]
  two-dim.shape() is [list: 4, 2]
  two-dim.as-3d(2, 2, 2).shape() is [list: 2, 2, 2]

  one-dim.as-3d(2, 1, 1) raises "Cannot reshape"
  two-dim.as-3d(4, 3, 2) raises "Cannot reshape"
end
- .as-4d :: (
- rows :: NumInteger,
- columns :: NumInteger,
- depth1 :: NumInteger,
- depth2 :: NumInteger
- )
- -> Tensor
Constructs a new, rank-4 Tensor with the input dimensions from the values of the original Tensor.
The number of elements implied by the input dimensions must be the same as the number of elements in the calling Tensor. Otherwise, the method raises an error.
The same functionality can be achieved with .reshape, but it’s recommended to use .as-4d as it makes the code more readable.
check:
  one-dim = [tensor: 1]
  two-dim = [tensor: 0, 1, 2, 3, 4, 5, 6, 7].as-2d(4, 2)

  one-dim.shape() is [list: 1]
  one-dim.as-4d(1, 1, 1, 1).shape() is [list: 1, 1, 1, 1]
  two-dim.shape() is [list: 4, 2]
  two-dim.as-4d(2, 2, 1, 2).shape() is [list: 2, 2, 1, 2]

  one-dim.as-4d(2, 1, 1, 1) raises "Cannot reshape"
  two-dim.as-4d(2, 2, 2, 2) raises "Cannot reshape"
end
Constructs a new Tensor from the values of the original Tensor with all of the values cast to the input datatype.
The possible data-types are "float32", "int32", or "bool". Any other data-type will raise an error.
check:
  some-tensor = [tensor: 1, 3, 5, 8]

  some-tensor.as-type("float32") does-not-raise
  some-tensor.as-type("int32") does-not-raise
  some-tensor.as-type("bool") does-not-raise
  some-tensor.as-type("invalid") raises "Attempted to cast tensor to invalid type"
end
Returns a List containing the data in the Tensor.
check:
  [tensor: ].data-now() is-roughly [list: ]
  [tensor: 1].data-now() is-roughly [list: 1]
  [tensor: 1.43].data-now() is-roughly [list: 1.43]
  [tensor: -3.21, 9.4, 0.32].data-now() is-roughly [list: -3.21, 9.4, 0.32]
end
Constructs a new Tensor from the values of the original Tensor with all of the values cast to the "float32" datatype.
check:
  [tensor: 0].to-float().data-now() is-roughly [list: 0]
  [tensor: 1].to-float().data-now() is-roughly [list: 1]
  [tensor: 0.42].to-float().data-now() is-roughly [list: 0.42]
  [tensor: 4, 0.32, 9.40, 8].to-float().data-now() is-roughly [list: 4, 0.32, 9.40, 8]
end
Constructs a new Tensor from the values of the original Tensor with all of the values cast to the "int32" datatype.
check:
  [tensor: 0].to-int().data-now() is-roughly [list: 0]
  [tensor: 1].to-int().data-now() is-roughly [list: 1]
  [tensor: 0.999999].to-int().data-now() is-roughly [list: 0]
  [tensor: 1.52, 4.12, 5.99].to-int().data-now() is-roughly [list: 1, 4, 5]
end
Constructs a new Tensor from the values of the original Tensor with all of the values cast to the "bool" datatype.
check:
  [tensor: 0].to-bool().data-now() is-roughly [list: 0]
  [tensor: 1].to-bool().data-now() is-roughly [list: 1]
  [tensor: 0.42].to-bool().data-now() is-roughly [list: 1]
  [tensor: 1, 4, 5].to-bool().data-now() is-roughly [list: 1, 1, 1]
end
Constructs a new TensorBuffer from the values of the original Tensor.
check:
  empty-buffer = [tensor: ].to-buffer()
  empty-buffer satisfies is-tensor-buffer
  empty-buffer.get-all-now() is-roughly [list: ]

  some-shape = [list: 2, 2]
  some-values = [list: 4, 5, 9, 3]
  some-tensor = list-to-tensor(some-values).reshape(some-shape)
  some-buffer = some-tensor.to-buffer()
  some-buffer satisfies is-tensor-buffer
  some-buffer.get-all-now() is-roughly some-values
  some-buffer.to-tensor().shape() is some-shape
end
Constructs a new, mutable Tensor from the values of the original Tensor. Equivalent to applying make-variable on the calling Tensor.
check:
  [tensor: ].to-variable() does-not-raise
  [tensor: 4, 5, 1].to-variable() does-not-raise
  [tensor: 0, 5, 1, 9, 8, 4].as-2d(3, 2).to-variable() does-not-raise
end
Constructs a new Tensor with the input dimensions new-shape from the values of the original Tensor.
The number of elements implied by new-shape must be the same as the number of elements in the calling Tensor. Otherwise, the method raises an error.
When reshaping a Tensor to be 0-, 1-, 2-, 3-, or 4-dimensional, it’s recommended to use .as-scalar, .as-1d, .as-2d, .as-3d, or .as-4d as they make the code more readable.
check:
  [tensor: ].reshape([list: ]) raises "Cannot reshape"
  [tensor: 3, 2].reshape([list: ]) raises "Cannot reshape"
  [tensor: 3, 2].reshape([list: 6]) raises "Cannot reshape"
  [tensor: 3, 2, 1].reshape([list: 2, 4]) raises "Cannot reshape"

  [tensor: 1].reshape([list: 1]).shape() is [list: 1]
  [tensor: 1].reshape([list: 1, 1, 1]).shape() is [list: 1, 1, 1]
  [tensor: 1].reshape([list: 1, 1, 1, 1, 1]).shape() is [list: 1, 1, 1, 1, 1]
  [tensor: 1, 4].reshape([list: 2, 1]).shape() is [list: 2, 1]
  [tensor: 1, 4, 4, 5, 9, 3].reshape([list: 3, 2]).shape() is [list: 3, 2]
end
Returns a Tensor that has expanded rank, by inserting a dimension into the Tensor’s shape at the given dimension index axis. If axis is none, the method inserts a dimension at index 0 by default.
check:
  one-dim = [tensor: 1, 2, 3, 4]
  one-dim.shape() is [list: 4]
  one-dim.expand-dims(none).shape() is [list: 1, 4]
  one-dim.expand-dims(some(1)).shape() is [list: 4, 1]
  one-dim.expand-dims(some(2)) raises "input axis must be less than or equal to the rank of the tensor"
end
Returns a Tensor with dimensions of size 1 removed from the shape.
If axes is not none, the method only squeezes the dimensions listed as indices in axes. The method will raise an error if one of the dimensions specified in axes is not of size 1.
check:
  multi-dim = [tensor: 1, 2, 3, 4].reshape([list: 1, 1, 1, 4, 1])
  multi-dim.shape() is [list: 1, 1, 1, 4, 1]

  multi-dim.squeeze(none).shape() is [list: 4]
  multi-dim.squeeze(some([list: 0])).shape() is [list: 1, 1, 4, 1]
  multi-dim.squeeze(some([list: 4])).shape() is [list: 1, 1, 1, 4]
  multi-dim.squeeze(some([list: 1, 2])).shape() is [list: 1, 4, 1]

  multi-dim.squeeze(some([list: 7])) raises "Cannot squeeze axis 7 since the axis does not exist"
  multi-dim.squeeze(some([list: 3])) raises "Cannot squeeze axis 3 since the dimension of that axis is 4, not 1"
end
Constructs a new Tensor that is a copy of the original Tensor.
check:
  some-tensor = [tensor: 1, 2, 3, 4]
  new-tensor = some-tensor.clone()
  new-tensor.size() is some-tensor.size()
  new-tensor.shape() is some-tensor.shape()
  new-tensor.data-now() is-roughly some-tensor.data-now()
end
Adds x to the Tensor. This is equivalent to add-tensors(self, x).
check:
  [tensor: 1].add([tensor: 1]).data-now() is-roughly [list: 2]
  [tensor: 1, 3].add([tensor: 1]).data-now() is-roughly [list: 2, 4]
  [tensor: 1, 3].add([tensor: 5, 1]).data-now() is-roughly [list: 6, 4]
  [tensor: 1, 3, 4].add([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Subtracts x from the Tensor. This is equivalent to subtract-tensors(self, x).
check:
  [tensor: 1].subtract([tensor: 1]).data-now() is-roughly [list: 0]
  [tensor: 1, 3].subtract([tensor: 1]).data-now() is-roughly [list: 0, 2]
  [tensor: 1, 3].subtract([tensor: 5, 1]).data-now() is-roughly [list: -4, 2]
  [tensor: 1, 3, 4].subtract([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Multiplies the Tensor by x. This is equivalent to multiply-tensors(self, x).
check:
  [tensor: 1].multiply([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 1, 3].multiply([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].multiply([tensor: 5, 1]).data-now() is-roughly [list: 5, 3]
  [tensor: 1, 3, 4].multiply([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Divides the Tensor by x. This is equivalent to divide-tensors(self, x).
check:
  [tensor: 1].divide([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 1, 3].divide([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].divide([tensor: 5, 1]).data-now() is-roughly [list: 0.2, 3]
  [tensor: 1, 3, 4].divide([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"

  [tensor: 1].divide([tensor: 0]) raises "The argument Tensor cannot contain 0"
  [tensor: 4.23].divide([tensor: 7.65, 1.43, 0, 2.31]) raises "The argument Tensor cannot contain 0"
end
Divides the Tensor by x, with the result rounded with the floor function. This is equivalent to floor-divide-tensors(self, x).
check:
  [tensor: 1].floor-divide([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 1, 3].floor-divide([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].floor-divide([tensor: 5, 1]).data-now() is-roughly [list: 0, 3]
  [tensor: 1, 3, 4].floor-divide([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"

  [tensor: 1].floor-divide([tensor: 0]) raises "The argument Tensor cannot contain 0"
  [tensor: 4.23].floor-divide([tensor: 7.65, 1.43, 0]) raises "The argument Tensor cannot contain 0"
end
Returns the maximum of the Tensor and x. This is equivalent to tensor-max(self, x).
check:
  [tensor: 0].max([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 1, 3].max([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].max([tensor: 200]).data-now() is-roughly [list: 200, 200]
  [tensor: 1, 3].max([tensor: 5, 1]).data-now() is-roughly [list: 5, 3]
  [tensor: 1, 3, 4].max([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Returns the minimum of the Tensor and x. This is equivalent to tensor-min(self, x).
check:
  [tensor: 0].min([tensor: 1]).data-now() is-roughly [list: 0]
  [tensor: 1, 3].min([tensor: 1]).data-now() is-roughly [list: 1, 1]
  [tensor: 1, 3].min([tensor: 200]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].min([tensor: 0]).data-now() is-roughly [list: 0, 0]
  [tensor: 1, 3, 4].min([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Computes the modulo of the Tensor and x. This is equivalent to tensor-modulo(self, x).
check:
  [tensor: 0].modulo([tensor: 1]).data-now() is-roughly [list: 0]
  [tensor: 1, 3].modulo([tensor: 1]).data-now() is-roughly [list: 0, 0]
  [tensor: 1, 3].modulo([tensor: 5, 1]).data-now() is-roughly [list: 1, 0]
  [tensor: 1, 3, 4].modulo([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"

  [tensor: 1].modulo([tensor: 0]) raises "The argument Tensor cannot contain 0"
  [tensor: 1].modulo([tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
end
Raises the Tensor to the power of x, element-wise. This is equivalent to tensor-expt(self, x).
check:
  [tensor: 0].expt([tensor: 1]).data-now() is-roughly [list: 0]
  [tensor: 1, 3].expt([tensor: 1]).data-now() is-roughly [list: 1, 3]
  [tensor: 1, 3].expt([tensor: 4]).data-now() is-roughly [list: 1, 81]
  [tensor: 3, 3].expt([tensor: 5, 1]).data-now() is-roughly [list: 243, 3]
  [tensor: 1, 3, 4].expt([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Computes (self - x) * (self - x), element-wise. This is equivalent to squared-difference(self, x).
check:
  [tensor: 0].squared-difference([tensor: 1]).data-now() is-roughly [list: 1]
  [tensor: 3].squared-difference([tensor: -3]).data-now() is-roughly [list: 36]
  [tensor: 1, 3].squared-difference([tensor: 4]).data-now() is-roughly [list: 9, 1]
  [tensor: 3, 3].squared-difference([tensor: 5, 1]).data-now() is-roughly [list: 4, 4]
  [tensor: 1, 3, 4].squared-difference([tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
3.28.2 Tensor Operations
3.28.2.1 Arithmetic Operations
All arithmetic operations are binary operations that accept two Tensors as arguments. If the size of an axis in one Tensor is greater than 1, the corresponding axis in the other Tensor must either be the same size or have size 1 (in which case its values are broadcast); otherwise, the operation raises an error.
# Valid operations:
add-tensors([tensor: 1], [tensor: 1])
add-tensors([tensor: 1, 2, 3], [tensor: 1])
add-tensors([tensor: 1, 2, 3, 4].as-2d(2, 2), [tensor: 1])
add-tensors([tensor: 1, 2], [tensor: 1, 2, 3, 4].as-2d(2, 2))
add-tensors([tensor: 1, 2].as-2d(2, 1), [tensor: 1, 2].as-2d(1, 2))
add-tensors([tensor: 1, 2, 3, 4].as-2d(2, 2), [tensor: 1, 2].as-2d(2, 1))

# Invalid operations:
add-tensors([tensor: 1, 2, 3], [tensor: 1, 2])
add-tensors([tensor: 1, 2].as-2d(2, 1), [tensor: 1, 2, 3].as-2d(3, 1))
In some cases, this behavior isn’t intended, so most arithmetic operations have a "strict" counterpart that raises an error if the two input Tensors do not have the same shape.
Adds two Tensors element-wise, A + B.
To assert that a and b are the same shape, use strict-add-tensors.
check:
  add-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 2]
  add-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 2, 4]
  add-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 6, 4]
  add-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Subtracts two Tensors element-wise, A - B.
To assert that a and b are the same shape, use strict-subtract-tensors.
check:
  subtract-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 0]
  subtract-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 0, 2]
  subtract-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: -4, 2]
  subtract-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Multiplies two Tensors element-wise, A * B.
To assert that a and b are the same shape, use strict-multiply-tensors.
check:
  multiply-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 1]
  multiply-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 3]
  multiply-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 5, 3]
  multiply-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Divides two Tensors element-wise, A / B.
To assert that a and b are the same shape, use strict-divide-tensors.
check:
  divide-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 1]
  divide-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 3]
  divide-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 0.2, 3]
  divide-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"

  divide-tensors([tensor: 1], [tensor: 0]) raises "The argument Tensor cannot contain 0"
  divide-tensors([tensor: 4.23], [tensor: 7.65, 1.43, 0, 2.31]) raises "The argument Tensor cannot contain 0"
end
Divides two Tensors element-wise, A / B, with the result rounded with the floor function.
check:
  floor-divide-tensors([tensor: 1], [tensor: 1]).data-now() is-roughly [list: 1]
  floor-divide-tensors([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 3]
  floor-divide-tensors([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 0, 3]
  floor-divide-tensors([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"

  floor-divide-tensors([tensor: 1], [tensor: 0]) raises "The argument Tensor cannot contain 0"
  floor-divide-tensors([tensor: 4.23], [tensor: 7.65, 1.43, 0]) raises "The argument Tensor cannot contain 0"
end
Returns a Tensor containing the maximum of a and b, element-wise.
To assert that a and b are the same shape, use strict-tensor-max.
check:
  tensor-max([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 1]
  tensor-max([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 3]
  tensor-max([tensor: 1, 3], [tensor: 200]).data-now() is-roughly [list: 200, 200]
  tensor-max([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 5, 3]
  tensor-max([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Returns a Tensor containing the minimum of a and b, element-wise.
To assert that a and b are the same shape, use strict-tensor-min.
check:
  tensor-min([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 0]
  tensor-min([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 1, 1]
  tensor-min([tensor: 1, 3], [tensor: 200]).data-now() is-roughly [list: 1, 3]
  tensor-min([tensor: 1, 3], [tensor: 0]).data-now() is-roughly [list: 0, 0]
  tensor-min([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 1, 1]
  tensor-min([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Computes the modulo of a and b, element-wise.
To assert that a and b are the same shape, use strict-tensor-modulo.
check:
  tensor-modulo([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 0]
  tensor-modulo([tensor: 1, 3], [tensor: 1]).data-now() is-roughly [list: 0, 0]
  tensor-modulo([tensor: 1, 3], [tensor: 200]).data-now() is-roughly [list: 1, 3]
  tensor-modulo([tensor: 1, 3], [tensor: 5, 1]).data-now() is-roughly [list: 1, 0]
  tensor-modulo([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Raises a to the power of b, element-wise.
To ensure that a and b are the same shape, use strict-tensor-expt.
check:
  tensor-expt([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 0]
  tensor-expt([tensor: 3], [tensor: -3]).data-now() is-roughly [list: 0.03703703]
  tensor-expt([tensor: 1, 3], [tensor: 4]).data-now() is-roughly [list: 1, 81]
  tensor-expt([tensor: 3, 3], [tensor: 5, 1]).data-now() is-roughly [list: 243, 3]
  tensor-expt([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Computes (a - b) * (a - b), element-wise.
To assert that a and b are the same shape, use strict-squared-difference.
check:
  squared-difference([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 1]
  squared-difference([tensor: 3], [tensor: -3]).data-now() is-roughly [list: 36]
  squared-difference([tensor: 1, 3], [tensor: 4]).data-now() is-roughly [list: 9, 1]
  squared-difference([tensor: 3, 3], [tensor: 5, 1]).data-now() is-roughly [list: 4, 4]
  squared-difference([tensor: 1, 3, 4], [tensor: 5, 1]) raises "Tensors could not be applied as binary operation arguments"
end
Same as add-tensors, but raises an error if a and b are not the same shape (as determined by .shape).
check:
  strict-add-tensors([tensor: 1], [tensor: 0]) is-roughly add-tensors([tensor: 1], [tensor: 0])
  strict-add-tensors([tensor: -4, -1], [tensor: -8, -2]) is-roughly add-tensors([tensor: -4, -1], [tensor: -8, -2])

  strict-add-tensors([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-add-tensors([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end
Same as subtract-tensors, but raises an error if a and b are not the same shape (as determined by .shape).
check:
  strict-subtract-tensors([tensor: 1], [tensor: 0]) is-roughly subtract-tensors([tensor: 1], [tensor: 0])
  strict-subtract-tensors([tensor: -4, -1], [tensor: -8, -2]) is-roughly subtract-tensors([tensor: -4, -1], [tensor: -8, -2])

  strict-subtract-tensors([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-subtract-tensors([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end
Same as multiply-tensors, but raises an error if a and b are not the same shape (as determined by .shape).
check:
  strict-multiply-tensors([tensor: 1], [tensor: 0]) is-roughly multiply-tensors([tensor: 1], [tensor: 0])
  strict-multiply-tensors([tensor: -4, -1], [tensor: -8, -2]) is-roughly multiply-tensors([tensor: -4, -1], [tensor: -8, -2])

  strict-multiply-tensors([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-multiply-tensors([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end
Same as divide-tensors, but raises an error if a and b are not the same shape (as determined by .shape).
check:
  strict-divide-tensors([tensor: 1], [tensor: 1]) is-roughly divide-tensors([tensor: 1], [tensor: 1])
  strict-divide-tensors([tensor: -4, -1], [tensor: -8, -2]) is-roughly divide-tensors([tensor: -4, -1], [tensor: -8, -2])

  strict-divide-tensors([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-divide-tensors([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"

  strict-divide-tensors([tensor: 1], [tensor: 0]) raises "The argument Tensor cannot contain 0"
  strict-divide-tensors([tensor: 1, 1], [tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
end
Same as tensor-max, but raises an error if a and b are not the same shape (as determined by .shape).
check:
  strict-tensor-max([tensor: 1], [tensor: 0]) is-roughly tensor-max([tensor: 1], [tensor: 0])
  strict-tensor-max([tensor: -4, -1], [tensor: -8, -2]) is-roughly tensor-max([tensor: -4, -1], [tensor: -8, -2])

  strict-tensor-max([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-tensor-max([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end
Same as tensor-min, but raises an error if a and b are not the same shape (as determined by .shape).
check:
  strict-tensor-min([tensor: 1], [tensor: 0]) is-roughly tensor-min([tensor: 1], [tensor: 0])
  strict-tensor-min([tensor: -4, -1], [tensor: -8, -2]) is-roughly tensor-min([tensor: -4, -1], [tensor: -8, -2])

  strict-tensor-min([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-tensor-min([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end
Same as tensor-expt, but raises an error if a and b are not the same shape (as determined by .shape).
check:
  strict-tensor-expt([tensor: 1], [tensor: 0]) is-roughly tensor-expt([tensor: 1], [tensor: 0])
  strict-tensor-expt([tensor: -4, -1], [tensor: -8, -2]) is-roughly tensor-expt([tensor: -4, -1], [tensor: -8, -2])

  strict-tensor-expt([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-tensor-expt([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end
Same as tensor-modulo, but raises an error if a and b are not the same shape (as determined by .shape).
check:
  strict-tensor-modulo([tensor: 1], [tensor: 1]) is-roughly tensor-modulo([tensor: 1], [tensor: 1])
  strict-tensor-modulo([tensor: -4, -1], [tensor: -8, -2]) is-roughly tensor-modulo([tensor: -4, -1], [tensor: -8, -2])

  strict-tensor-modulo([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-tensor-modulo([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"

  strict-tensor-modulo([tensor: 1], [tensor: 0]) raises "The argument Tensor cannot contain 0"
  strict-tensor-modulo([tensor: 1, 1], [tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
end
Same as squared-difference, but raises an error if a and b are not the same shape (as determined by .shape).
check:
  strict-squared-difference([tensor: 1], [tensor: 0]) is-roughly squared-difference([tensor: 1], [tensor: 0])
  strict-squared-difference([tensor: -4, -1], [tensor: -8, -2]) is-roughly squared-difference([tensor: -4, -1], [tensor: -8, -2])

  strict-squared-difference([tensor: 1], [tensor: 1, 2]) raises "The first tensor does not have the same shape as the second tensor"
  strict-squared-difference([tensor: 8, 0].as-2d(2, 1), [tensor: 3, 1]) raises "The first tensor does not have the same shape as the second tensor"
end
3.28.2.2 Trigonometry Operations
Computes the inverse cosine of the Tensor, element-wise.
All of the values in the input Tensor must be between -1 and 1, inclusive; otherwise, the function raises an error.
check:
  tensor-acos([tensor: 1]).data-now() is-roughly [list: 0]
  tensor-acos([tensor: 0]).data-now() is-roughly [list: ~1.5707963]
  tensor-acos([tensor: -1]).data-now() is-roughly [list: ~3.1415927]
  tensor-acos([tensor: 0.5, 0.2, 0.6]).data-now() is-roughly [list: ~1.0471975, ~1.3694384, ~0.9272952]

  tensor-acos([tensor: 10]) raises "Values in the input Tensor must be between -1 and 1, inclusive"
  tensor-acos([tensor: -1, 0, 16, -2]) raises "Values in the input Tensor must be between -1 and 1, inclusive"
end
Computes the inverse hyperbolic cosine of the Tensor, element-wise.
All of the values in the input Tensor must be greater than or equal to 1; otherwise, the function raises an error.
check:
  tensor-acosh([tensor: 1]).data-now() is-roughly [list: 0]
  tensor-acosh([tensor: 2]).data-now() is-roughly [list: ~1.3169579]
  tensor-acosh([tensor: 1, 5, 10, 200]).data-now() is-roughly [list: ~0, ~2.2924315, ~2.9932229, ~5.9914584]

  tensor-acosh([tensor: 0]) raises "Values in the input Tensor must be at least 1"
  tensor-acosh([tensor: 4, 1, 10, 32, -2, 82]) raises "Values in the input Tensor must be at least 1"
end
Computes the inverse sine of the Tensor, element-wise.
All of the values in the input Tensor must be between -1 and 1, inclusive; otherwise, the function raises an error.
check:
  # Check one-dimensional usages:
  tensor-asin([tensor: 1]).data-now() is-roughly [list: ~1.5707963]
  tensor-asin([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-asin([tensor: -0.5]).data-now() is-roughly [list: ~-0.5235987]
  tensor-asin([tensor: 0.5, 0.2, 0.6]).data-now() is-roughly [list: ~0.5235987, ~0.2013579, ~0.6435011]

  # Check bounding values:
  tensor-asin([tensor: 9]) raises "Values in the input Tensor must be between -1 and 1, inclusive"
  tensor-asin([tensor: -1, -2, -3]) raises "Values in the input Tensor must be between -1 and 1, inclusive"
end
Computes the inverse hyperbolic sine of the Tensor, element-wise.
check:
  tensor-asinh([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-asinh([tensor: 1]).data-now() is-roughly [list: ~0.8813736]
  tensor-asinh([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-0.8813736, ~-1.4436353, ~-1.8184462]
  tensor-asinh([tensor: 21, 0, 32, 2]).data-now() is-roughly [list: ~3.7382359, ~0, ~4.1591272, ~1.4436354]
end
Computes the inverse tangent of the Tensor, element-wise.
check:
  tensor-atan([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-atan([tensor: 1]).data-now() is-roughly [list: ~0.7853981]
  tensor-atan([tensor: -1]).data-now() is-roughly [list: ~-0.7853981]
  tensor-atan([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-0.7853981, ~-1.1071487, ~-1.2490458]
end
Computes the four-quadrant inverse tangent of a and b, element-wise.
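A sketch of expected usage, assuming the function is exposed as tensor-atan2 (matching the naming of the other trigonometry operations in this section):

check:
  # atan2 takes the y-coordinates first and the x-coordinates second:
  tensor-atan2([tensor: 0], [tensor: 1]).data-now() is-roughly [list: 0]
  tensor-atan2([tensor: 1], [tensor: 1]).data-now() is-roughly [list: ~0.7853981]
  tensor-atan2([tensor: 1], [tensor: 0]).data-now() is-roughly [list: ~1.5707963]
  # In the third quadrant, the result is between -pi and -pi / 2:
  tensor-atan2([tensor: -1], [tensor: -1]).data-now() is-roughly [list: ~-2.3561944]
end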
Computes the inverse hyperbolic tangent of the Tensor, element-wise.
All of the values in the input Tensor must be between -1 and 1, exclusive; otherwise, the function raises an error.
check:
  # Check one-dimensional usages:
  tensor-atanh([tensor: 0.5]).data-now() is-roughly [list: ~0.5493061]
  tensor-atanh([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-atanh([tensor: -0.9]).data-now() is-roughly [list: ~-1.4722193]
  tensor-atanh([tensor: 0.5, 0.2, 0.6]).data-now() is-roughly [list: ~0.5493061, ~0.2027325, ~0.6931471]

  # Check bounding values:
  tensor-atanh([tensor: 1]) raises "Values in the input Tensor must be between -1 and 1, exclusive"
  tensor-atanh([tensor: -1]) raises "Values in the input Tensor must be between -1 and 1, exclusive"
  tensor-atanh([tensor: 0, 16, -1, 9, 1]) raises "Values in the input Tensor must be between -1 and 1, exclusive"
end
Computes the cosine of the Tensor, element-wise.
check:
  tensor-cos([tensor: 0]).data-now() is-roughly [list: 1]
  tensor-cos([tensor: 1]).data-now() is-roughly [list: ~0.5403115]
  tensor-cos([tensor: -1]).data-now() is-roughly [list: ~0.5403116]
  tensor-cos([tensor: 6, 2, -4]).data-now() is-roughly [list: ~0.9601798, ~-0.4161523, ~-0.6536576]
end
Computes the hyperbolic cosine of the Tensor, element-wise.
check:
  tensor-cosh([tensor: 0]).data-now() is-roughly [list: 1]
  tensor-cosh([tensor: 1]).data-now() is-roughly [list: ~1.5430805]
  tensor-cosh([tensor: -1]).data-now() is-roughly [list: ~1.5430805]
  tensor-cosh([tensor: -1, -2, -3]).data-now() is-roughly [list: ~1.5430805, ~3.7621955, ~10.0676612]
end
Computes the sine of the Tensor, element-wise.
check:
  tensor-sin([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-sin([tensor: 1]).data-now() is-roughly [list: ~0.8414709]
  tensor-sin([tensor: -1]).data-now() is-roughly [list: ~-0.8415220]
  tensor-sin([tensor: 6, 2, -4]).data-now() is-roughly [list: ~-0.2794162, ~0.9092976, ~0.7568427]
  tensor-sin([tensor: 21, 0, 32, 2]).data-now() is-roughly [list: ~0.8366656, ~0, ~0.5514304, ~0.9092976]
end
Computes the hyperbolic sine of the Tensor, element-wise.
check:
  tensor-sinh([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-sinh([tensor: 1]).data-now() is-roughly [list: ~1.1752011]
  tensor-sinh([tensor: -1]).data-now() is-roughly [list: ~-1.1752011]
  tensor-sinh([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-1.1752011, ~-3.6268603, ~-10.0178737]
  tensor-sinh([tensor: 6, 2, -4]).data-now() is-roughly [list: ~201.7131195, ~3.6268601, ~-27.2899169]
end
Computes the tangent of the Tensor, element-wise.
check:
  tensor-tan([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-tan([tensor: 1]).data-now() is-roughly [list: ~1.5573809]
  tensor-tan([tensor: -1]).data-now() is-roughly [list: ~-1.5573809]
  tensor-tan([tensor: 21, 0, 32, 2]).data-now() is-roughly [list: ~-1.5275151, ~0, ~0.6610110, ~-2.1850113]
end
Computes the hyperbolic tangent of the Tensor, element-wise.
check:
  tensor-tanh([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-tanh([tensor: 1]).data-now() is-roughly [list: ~0.7615941]
  tensor-tanh([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-0.7615941, ~-0.9640275, ~-0.9950547]
  tensor-tanh([tensor: 6, 2, -4]).data-now() is-roughly [list: ~0.9999876, ~0.9640275, ~-0.9993293]
end
3.28.2.3 Math Operations
Computes the absolute value of the Tensor, element-wise.
check:
  tensor-abs([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-abs([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-abs([tensor: -1]).data-now() is-roughly [list: 1]
  tensor-abs([tensor: -1, -2, -3]).data-now() is-roughly [list: 1, 2, 3]

  two-dim-abs = tensor-abs([tensor: -4, 5, -6, -7, -8, 9].as-2d(3, 2))
  two-dim-abs.shape() is [list: 3, 2]
  two-dim-abs.data-now() is-roughly [list: 4, 5, 6, 7, 8, 9]
end
Computes the ceiling of the Tensor, element-wise.
check:
  # Check usages on integer tensors:
  tensor-ceil([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-ceil([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-ceil([tensor: -1, -2, -3]).data-now() is-roughly [list: -1, -2, -3]

  # Check usages on float tensors:
  tensor-ceil([tensor: 0.3]).data-now() is-roughly [list: 1]
  tensor-ceil([tensor: 0.5]).data-now() is-roughly [list: 1]
  tensor-ceil([tensor: 0.8]).data-now() is-roughly [list: 1]
  tensor-ceil([tensor: -0.2]).data-now() is-roughly [list: 0]
  tensor-ceil([tensor: -0.5]).data-now() is-roughly [list: 0]
  tensor-ceil([tensor: -0.9]).data-now() is-roughly [list: 0]
  tensor-ceil([tensor: 3.5, 5.2, 1.6]).data-now() is-roughly [list: 4, 6, 2]
end
- clip-by-value :: (
- tensor :: Tensor,
- min-value :: Number,
- max-value :: Number
- )
- -> Tensor
Clips the values of the Tensor, element-wise, such that every element in the resulting Tensor is at least min-value and is at most max-value.
min-value must be less than or equal to max-value; otherwise, the function raises an error.
check:
  clip-by-value([tensor: 0], 0, 0).data-now() is-roughly [list: 0]
  clip-by-value([tensor: 0], -1, 1).data-now() is-roughly [list: 0]
  clip-by-value([tensor: 0], 1, 4).data-now() is-roughly [list: 1]
  clip-by-value([tensor: 21, 0, 32, 2], 4, 9).data-now() is-roughly [list: 9, 4, 9, 4]
  clip-by-value([tensor: 3, 9, 10, 3.24], 4.5, 9.4).data-now() is-roughly [list: 4.5, 9, 9.4, 4.5]

  clip-by-value([tensor: 1], 10, 0) raises "minimum value to clip to must be less than or equal to the maximum"
  clip-by-value([tensor: 1], -10, -45) raises "minimum value to clip to must be less than or equal to the maximum"
end
Applies the exponential linear units function to the Tensor, element-wise.
Alias for exponential-linear-units.
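A sketch of expected behavior, assuming the alias is named elu (ELU returns x for non-negative x and num-exp(x) - 1 for negative x):

check:
  exponential-linear-units([tensor: 0]).data-now() is-roughly [list: 0]
  exponential-linear-units([tensor: 2]).data-now() is-roughly [list: 2]
  # For negative inputs, ELU returns exp(x) - 1:
  exponential-linear-units([tensor: -1]).data-now() is-roughly [list: ~-0.6321205]
  # The alias behaves identically:
  elu([tensor: -1]).data-now() is-roughly [list: ~-0.6321205]
end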
Applies the gauss error function to the Tensor, element-wise.
Alias for gauss-error.
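A sketch of expected behavior for gauss-error (erf(0) is 0, and erf(x) approaches 1 as x grows):

check:
  gauss-error([tensor: 0]).data-now() is-roughly [list: 0]
  gauss-error([tensor: 1]).data-now() is-roughly [list: ~0.8427007]
  gauss-error([tensor: -1]).data-now() is-roughly [list: ~-0.8427007]
end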
Computes the equivalent of num-exp(tensor), element-wise.
Computes the equivalent of num-exp(tensor) - 1, element-wise.
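A sketch of expected behavior for both operations; the names tensor-exp and tensor-exp-min1 are assumptions:

check:
  tensor-exp([tensor: 0]).data-now() is-roughly [list: 1]
  tensor-exp([tensor: 1]).data-now() is-roughly [list: ~2.7182817]

  # tensor-exp-min1 computes exp(x) - 1:
  tensor-exp-min1([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-exp-min1([tensor: 1]).data-now() is-roughly [list: ~1.7182817]
  tensor-exp-min1([tensor: -1]).data-now() is-roughly [list: ~-0.6321205]
end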
Computes the floor of the Tensor, element-wise.
check:
  # Check usages on integer tensors:
  tensor-floor([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-floor([tensor: -1]).data-now() is-roughly [list: -1]
  tensor-floor([tensor: -1, -2, -3]).data-now() is-roughly [list: -1, -2, -3]

  # Check usages on float tensors:
  tensor-floor([tensor: 0.3]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 0.5]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 0.8]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 0.999]).data-now() is-roughly [list: 0]
  tensor-floor([tensor: 1.1]).data-now() is-roughly [list: 1]
  tensor-floor([tensor: -0.2]).data-now() is-roughly [list: -1]
  tensor-floor([tensor: -0.5]).data-now() is-roughly [list: -1]
  tensor-floor([tensor: -0.9]).data-now() is-roughly [list: -1]
  tensor-floor([tensor: 3.5, 5.2, 1.6]).data-now() is-roughly [list: 3, 5, 1]
end
Applies a leaky rectified linear units function to the Tensor, element-wise.
alpha is the scaling factor for negative values. The default in TensorFlow.js is 0.2, but the argument has been exposed here for more flexibility.
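A sketch of expected usage, assuming the function is exposed as leaky-relu with alpha as its second argument:

check:
  # Positive values pass through unchanged; negative values are scaled by alpha:
  leaky-relu([tensor: 2], 0.2).data-now() is-roughly [list: 2]
  leaky-relu([tensor: -5], 0.2).data-now() is-roughly [list: -1]
  leaky-relu([tensor: -1, 0, 1], 0.1).data-now() is-roughly [list: ~-0.1, 0, 1]
end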
Computes the natural logarithm of the Tensor, element-wise; that is, it computes the equivalent of num-log(tensor).
Computes the natural logarithm of the Tensor plus 1, element-wise; that is, it computes the equivalent of num-log(tensor + 1).
Applies the log sigmoid function to the Tensor, element-wise.
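A sketch of expected behavior for these three operations; the names tensor-log, tensor-log-plus1, and log-sigmoid are assumptions:

check:
  tensor-log([tensor: 1]).data-now() is-roughly [list: 0]
  tensor-log-plus1([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-log-plus1([tensor: 1]).data-now() is-roughly [list: ~0.6931471]
  # log-sigmoid(0) is log(1 / (1 + exp(0))), that is, log(0.5):
  log-sigmoid([tensor: 0]).data-now() is-roughly [list: ~-0.6931471]
end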
Multiplies each element in the Tensor by -1.
check:
  tensor-negate([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-negate([tensor: 1]).data-now() is-roughly [list: -1]
  tensor-negate([tensor: -1]).data-now() is-roughly [list: 1]
  tensor-negate([tensor: -1, 2, 3, -4, 5]).data-now() is-roughly [list: 1, -2, -3, 4, -5]
  tensor-negate([tensor: -1, -2, -3, -4, -5]).data-now() is-roughly [list: 1, 2, 3, 4, 5]
end
Applies a leaky rectified linear units function to the Tensor, element-wise, using parametric alphas.
alpha is the scaling factor for negative values.
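A sketch of expected usage, assuming the function is exposed as parametric-relu with alpha as its second argument:

check:
  parametric-relu([tensor: 3], 0.25).data-now() is-roughly [list: 3]
  parametric-relu([tensor: -4], 0.25).data-now() is-roughly [list: -1]
  parametric-relu([tensor: -2, 0, 2], 0.5).data-now() is-roughly [list: -1, 0, 2]
end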
Computes the reciprocal of the Tensor, element-wise; that is, it computes the equivalent of 1 / tensor.
In order to avoid division-by-zero errors, the input Tensor cannot contain 0; otherwise, the function raises an error.
check:
  tensor-reciprocal([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-reciprocal([tensor: -1]).data-now() is-roughly [list: -1]
  tensor-reciprocal([tensor: -1, -2, -3]).data-now() is-roughly [list: ~-1, ~-0.5, ~-0.3333333]

  # Check for division-by-zero errors:
  tensor-reciprocal([tensor: 0]) raises "The argument Tensor cannot contain 0"
  tensor-reciprocal([tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
  tensor-reciprocal([tensor: 7.65, 0, 1.43]) raises "The argument Tensor cannot contain 0"
end
Applies a rectified linear units function to the Tensor, element-wise.
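A sketch of expected behavior, assuming the function is exposed as relu (negative values are clamped to 0; non-negative values pass through):

check:
  relu([tensor: 0]).data-now() is-roughly [list: 0]
  relu([tensor: 5]).data-now() is-roughly [list: 5]
  relu([tensor: -3]).data-now() is-roughly [list: 0]
  relu([tensor: -1, 4, 0, -7, 2]).data-now() is-roughly [list: 0, 4, 0, 0, 2]
end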
Computes the equivalent of num-round(tensor), element-wise.
Due to unavoidable precision errors on Roughnums, the behavior for numbers ending in .5 is inconsistent. See the examples below.
check:
  tensor-round([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-round([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-round([tensor: -1]).data-now() is-roughly [list: -1]
  tensor-round([tensor: 0.1]).data-now() is-roughly [list: 0]
  tensor-round([tensor: 0.3]).data-now() is-roughly [list: 0]
  tensor-round([tensor: 0.8]).data-now() is-roughly [list: 1]
  tensor-round([tensor: 0.999]).data-now() is-roughly [list: 1]
  tensor-round([tensor: -1, -2, -3]).data-now() is-roughly [list: -1, -2, -3]
  tensor-round([tensor: 3.5, 5.2, 1.6]).data-now() is-roughly [list: 4, 5, 2]

  # Note inconsistent behavior with rounding on Roughnums:
  tensor-round([tensor: 0.5]).data-now() is-roughly [list: 0] # rounds down
  tensor-round([tensor: 3.5]).data-now() is-roughly [list: 4] # rounds up
end
Computes the reciprocal of the square root of the Tensor, element-wise.
The resulting Tensor is roughly equivalent to tensor-reciprocal(tensor-sqrt(tensor)).
In order to avoid division-by-zero errors, the input Tensor cannot contain 0; otherwise, the function raises an error.
check:
  reciprocal-sqrt([tensor: 1]).data-now() is-roughly [list: 1]
  reciprocal-sqrt([tensor: -1]).data-now() is-roughly [list: 1]
  reciprocal-sqrt([tensor: -1, -2, -3]).data-now() is-roughly [list: ~1, ~0.7071067, ~0.5773502]
  reciprocal-sqrt([tensor: 6, 2, -4]).data-now() is-roughly [list: ~0.4082482, ~0.7071067, ~0.5]

  # Check for division-by-zero errors:
  reciprocal-sqrt([tensor: 0]) raises "The argument Tensor cannot contain 0"
  reciprocal-sqrt([tensor: 1, 0]) raises "The argument Tensor cannot contain 0"
  reciprocal-sqrt([tensor: 7.65, 0, 1.43]) raises "The argument Tensor cannot contain 0"
end
Applies the scaled exponential linear units (SELU) function to the Tensor, element-wise.
Applies the sigmoid function to the Tensor, element-wise.
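A sketch of expected behavior for both functions; the names scaled-elu and sigmoid are assumptions:

check:
  sigmoid([tensor: 0]).data-now() is-roughly [list: 0.5]
  sigmoid([tensor: 1]).data-now() is-roughly [list: ~0.7310585]
  sigmoid([tensor: -1]).data-now() is-roughly [list: ~0.2689414]

  # For positive inputs, SELU is the input scaled by a constant (roughly 1.0507):
  scaled-elu([tensor: 1]).data-now() is-roughly [list: ~1.0507009]
end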
Returns an element-wise indication of the sign of each number in the Tensor; that is, every value in the original Tensor is represented in the resulting Tensor as ~+1 if the value is positive, ~-1 if the value is negative, or ~0 if the value is zero or not a number.
check:
  signed-ones([tensor: 0]).data-now() is-roughly [list: 0]
  signed-ones([tensor: 1]).data-now() is-roughly [list: 1]
  signed-ones([tensor: 3]).data-now() is-roughly [list: 1]
  signed-ones([tensor: -1]).data-now() is-roughly [list: -1]
  signed-ones([tensor: -5]).data-now() is-roughly [list: -1]
  signed-ones([tensor: 9, -7, 5, -3, -1, 0]).data-now() is-roughly [list: 1, -1, 1, -1, -1, 0]
end
Applies the softplus function to the Tensor, element-wise.
See https://sefiks.com/2017/08/11/softplus-as-a-neural-networks-activation-function/ for more information.
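A sketch of expected behavior, assuming the function is exposed as softplus (softplus(x) is num-log(1 + num-exp(x))):

check:
  # softplus(0) is log(2):
  softplus([tensor: 0]).data-now() is-roughly [list: ~0.6931471]
  softplus([tensor: 1]).data-now() is-roughly [list: ~1.3132616]
  softplus([tensor: -1]).data-now() is-roughly [list: ~0.3132616]
end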
Computes the square root of the Tensor, element-wise.
All of the values in the input Tensor must be greater than or equal to 0; otherwise, the function raises an error.
check:
  tensor-sqrt([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-sqrt([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-sqrt([tensor: 4]).data-now() is-roughly [list: 2]
  tensor-sqrt([tensor: 9]).data-now() is-roughly [list: 3]
  tensor-sqrt([tensor: 25]).data-now() is-roughly [list: 5]

  tensor-sqrt([tensor: -1]).data-now() raises "Values in the input Tensor must be at least 0"
  tensor-sqrt([tensor: 9, -7, 5, -3, -1, 0, 0.5]).data-now() raises "Values in the input Tensor must be at least 0"
end
Computes the square of the Tensor, element-wise.
check:
  tensor-square([tensor: 0]).data-now() is-roughly [list: 0]
  tensor-square([tensor: 1]).data-now() is-roughly [list: 1]
  tensor-square([tensor: 5]).data-now() is-roughly [list: 25]
  tensor-square([tensor: -1]).data-now() is-roughly [list: 1]
  tensor-square([tensor: -3]).data-now() is-roughly [list: 9]
  tensor-square([tensor: 9, -7, 5, -3, -1, 0, 0.5]).data-now() is-roughly [list: 81, 49, 25, 9, 1, 0, 0.25]
end
Applies the unit step function to the Tensor, element-wise; that is, every value in the original tensor is represented in the resulting tensor as ~+1 if the value is positive; otherwise, it is represented as ~0.
check:
  step([tensor: 0]).data-now() is-roughly [list: 0]
  step([tensor: 1]).data-now() is-roughly [list: 1]
  step([tensor: 5]).data-now() is-roughly [list: 1]
  step([tensor: -1]).data-now() is-roughly [list: 0]
  step([tensor: -3]).data-now() is-roughly [list: 0]
  step([tensor: -1, 4, 0, 0, 15, -43, 0]).data-now() is-roughly [list: 0, 1, 0, 0, 1, 0, 0]
end
3.28.2.4 Reduction Operations
Returns a new Tensor where each element is the index of the maximum value along the outermost dimension of tensor.
Returns a new Tensor where each element is the index of the minimum value along the outermost dimension of tensor.
Computes log(sum(exp(elements along the outermost dimension))).
Reduces tensor along the outermost dimension.
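A sketch of expected behavior for these three reductions; the names arg-max, arg-min, and log-sum-exp, each taking only the input Tensor, are assumptions:

check:
  arg-max([tensor: 1, 5, 3]).data-now() is-roughly [list: 1]
  arg-min([tensor: 1, 5, 3]).data-now() is-roughly [list: 0]
  # log(exp(1) + exp(2) + exp(3)):
  log-sum-exp([tensor: 1, 2, 3]).data-now() is-roughly [list: ~3.4076059]
end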
Reduces the input Tensor across all dimensions by computing the logical "and" of its elements.
tensor must be of type "bool"; otherwise, the function raises an error.
Reduces the input Tensor across all dimensions by computing the logical "or" of its elements.
tensor must be of type "bool"; otherwise, the function raises an error.
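A sketch of expected behavior for both boolean reductions; the names reduce-all and reduce-any are assumptions:

check:
  reduce-all([tensor: 1, 1, 1].as-type("bool")).data-now() is-roughly [list: 1]
  reduce-all([tensor: 1, 0, 1].as-type("bool")).data-now() is-roughly [list: 0]
  reduce-any([tensor: 0, 0, 0].as-type("bool")).data-now() is-roughly [list: 0]
  reduce-any([tensor: 0, 1, 0].as-type("bool")).data-now() is-roughly [list: 1]
end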
Returns a Tensor containing a single value that is the maximum value of all entries in tensor.
Returns a Tensor containing a single value that is the minimum value of all entries in tensor.
Returns a Tensor containing a single value that is the mean value of all entries in tensor.
Returns a Tensor containing a single value that is the sum of all entries in tensor.
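A sketch of expected behavior for these four reductions; the names reduce-max, reduce-min, reduce-mean, and reduce-sum are assumptions:

check:
  reduce-max([tensor: 1, 5, 3]).data-now() is-roughly [list: 5]
  reduce-min([tensor: 1, 5, 3]).data-now() is-roughly [list: 1]
  reduce-mean([tensor: 1, 5, 3]).data-now() is-roughly [list: 3]
  reduce-sum([tensor: 1, 5, 3]).data-now() is-roughly [list: 9]
end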
3.28.2.5 Slicing and Joining Operations
Concatenates each Tensor in tensors along the given axis.
The Tensors’ ranks and types must match, and their sizes must match in all dimensions except axis.
check:
  concatenate([list: [tensor: 1], [tensor: 2]], 0).data-now() is-roughly [list: 1, 2]
  concatenate([list: [tensor: 1, 2, 3], [tensor: 4, 5, 6]], 0).data-now() is-roughly [list: 1, 2, 3, 4, 5, 6]

  two-dim-1 = [tensor: 1, 2, 3, 4].as-2d(2, 2)
  two-dim-2 = [tensor: 5, 6, 7, 8].as-2d(2, 2)
  concatenate([list: two-dim-1, two-dim-2], 0).data-now() is-roughly [list: 1, 2, 3, 4, 5, 6, 7, 8]
  concatenate([list: two-dim-1, two-dim-2], 1).data-now() is-roughly [list: 1, 2, 5, 6, 3, 4, 7, 8]
end
- gather :: (
- tensor :: Tensor,
- indices :: Tensor,
- axis :: Option<NumInteger>
- )
- -> Tensor
Gathers slices from the Tensor at every index in indices along the given axis. If axis is none, the function gathers along the first axis (axis 0).
check:
  input-1 = [tensor: 1, 2, 3, 4]
  indices-1 = [tensor: 1, 3, 3]
  gather(input-1, indices-1, none).data-now() is [list: 2, 4, 4]

  input-2 = [tensor: 1, 2, 3, 4].as-2d(2, 2)
  indices-2 = [tensor: 1, 1, 0]
  gather(input-2, indices-2, none).data-now() is [list: 3, 4, 3, 4, 1, 2]
end
Reverses the values in tensor along the specified axis.
If axes is none, the function defaults to reversing along all axes.
check:
  reverse([tensor: 0], none).data-now() is-roughly [list: 0]
  reverse([tensor: 1, 2], none).data-now() is-roughly [list: 2, 1]
  reverse([tensor: 1, 2, 3, 4, 5], none).data-now() is-roughly [list: 5, 4, 3, 2, 1]

  two-dim = [tensor: 1, 2, 3, 4, 5, 6].as-2d(3, 2)
  reverse(two-dim, none).data-now() is-roughly [list: 6, 5, 4, 3, 2, 1]
  reverse(two-dim, some([list: 0])).data-now() is-roughly [list: 5, 6, 3, 4, 1, 2]
  reverse(two-dim, some([list: 1])).data-now() is-roughly [list: 2, 1, 4, 3, 6, 5]
end
- slice :: (
- tensor :: Tensor,
- begin :: List<NumInteger>,
- size :: Option<List<NumInteger>>
- )
- -> Tensor
Extracts a slice from tensor starting at the coordinates represented by begin. The resulting slice is of size size.
A value of -1 in size means that the resulting slice will go all the way to the end of the dimensions in the respective axis.
If the length of size is less than the rank of the input tensor, the size of the rest of the axes will be implicitly set to -1. If size is none, the size of all axes will be set to -1.
check:
  slice([tensor: 1], [list: 0], none).data-now() is-roughly [list: 1]
  slice([tensor: 1, 2, 3, 4, 5], [list: 2], none).data-now() is-roughly [list: 3, 4, 5]

  two-dim = [tensor: 1, 2, 3, 4, 5, 6].as-2d(3, 2)
  slice(two-dim, [list: 2, 1], none).data-now() is-roughly [list: 6]
  slice(two-dim, [list: 1, 0], none).data-now() is-roughly [list: 3, 4, 5, 6]
  slice(two-dim, [list: 2], none) raises "number of coordinates to start the slice at must be equal to the rank"

  slice(two-dim, [list: 1, 0], some([list: 2, 1])).data-now() is-roughly [list: 3, 5]
  slice(two-dim, [list: 1, 0], some([list: 1, 2])).data-now() is-roughly [list: 3, 4]
  slice(two-dim, [list: 1, 0], some([list: 1])) raises "dimensions for the size of the slice at must be equal to the rank"
end
- split :: (
- tensor :: Tensor,
- split-sizes :: List<NumInteger>,
- axis :: NumInteger
- )
- -> List<Tensor>
Splits tensor into sub-Tensors along the specified axis.
split-sizes represents the sizes of each output Tensor along the axis. The sum of the sizes in split-sizes must be equal to tensor.shape().get(axis); otherwise, an error will be raised.
check:
  one-dim = split([tensor: 1, 2, 3, 4], [list: 1, 1, 2], 0)
  one-dim.length() is 3
  one-dim.get(0).data-now() is-roughly [list: 1]
  one-dim.get(1).data-now() is-roughly [list: 2]
  one-dim.get(2).data-now() is-roughly [list: 3, 4]

  split([tensor: 1, 2, 3, 4], [list: 1], 0) raises "sum of split sizes must match the size of the dimension"
  split([tensor: 1, 2, 3, 4], [list: 1, 1, 1, 1, 1], 0) raises "sum of split sizes must match the size of the dimension"
end
Stacks a list of rank-R Tensors into one rank-(R + 1) Tensor along the specified axis.
Every Tensor in tensors must have the same shape and data type; otherwise, the function raises an error.
If axis is none, the operation will stack along the first dimension (axis 0) by default.
check:
  stack([list: [tensor: 1]], 0).data-now() is-roughly [list: 1]
  stack([list: [tensor: 1], [tensor: 2]], 0).data-now() is-roughly [list: 1, 2]
  stack([list: [tensor: 1, 2], [tensor: 3, 4], [tensor: 5, 6]], 0).data-now() is-roughly [list: 1, 2, 3, 4, 5, 6]

  stack(empty, 0).data-now() raises "At least one Tensor must be supplied"
  stack([list: [tensor: 1]], 1) raises "Axis must be within the bounds of the Tensor"
  stack([list: [tensor: 1], [tensor: 2, 3], [tensor: 4]], 0) raises "All tensors passed to `stack` must have matching shapes"
end
Constructs a new Tensor by repeating tensor the number of times given by repetitions. Each number in repetitions represents the number of replications in each dimension; that is, the first element in the list represents the number of replications along the first dimension, and so on.
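A sketch of expected usage; both the function name tensor-tile and the List<NumInteger> type of repetitions are assumptions:

check:
  tensor-tile([tensor: 1, 2], [list: 3]).data-now() is-roughly [list: 1, 2, 1, 2, 1, 2]

  # Tiling a 2 x 2 tensor twice along its second dimension yields a 2 x 4 tensor:
  two-dim = [tensor: 1, 2, 3, 4].as-2d(2, 2)
  tensor-tile(two-dim, [list: 1, 2]).shape() is [list: 2, 4]
  tensor-tile(two-dim, [list: 1, 2]).data-now() is-roughly [list: 1, 2, 1, 2, 3, 4, 3, 4]
end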
Unstacks a rank-R Tensor into a List of rank-(R - 1) Tensors along the specified axis.
If axis is none, the operation will split along the first dimension (axis 0) by default.
check:
  unstack([tensor: 1], 0).map({(x): x.data-now()}) is-roughly [list: [list: 1]]
  unstack([tensor: 1, 2], 0).map({(x): x.data-now()}) is-roughly [list: [list: 1], [list: 2]]
  unstack([tensor: 1, 2, 3, 4], 0).map({(x): x.data-now()}) is-roughly [list: [list: 1], [list: 2], [list: 3], [list: 4]]

  unstack([tensor: 1].as-scalar(), 0) raises "Tensor to be unstacked must be at least rank-1, but was rank-0"
  unstack([tensor: 1, 2, 3, 4], 1) raises "axis at which to unstack the Tensor must be within the bounds"
end
strided-slice :: (tensor :: Tensor, begin :: List<NumInteger>, end :: List<NumInteger>, strides :: List<Number>) -> Tensor
Extracts a strided slice of a Tensor.
Roughly speaking, this operation extracts a slice of size (end - begin) / stride from tensor. Starting at the location specified by begin, the slice advances by stride along each dimension until the index in every dimension reaches end. Note that a stride can be negative, which produces a reverse slice.
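A sketch of this behavior on a 3 x 2 tensor (assuming the slicing semantics of the underlying TensorFlow stridedSlice operation):

check:
  two-dim = [tensor: 1, 2, 3, 4, 5, 6].as-2d(3, 2)
  strided-slice(two-dim, [list: 0, 0], [list: 3, 2], [list: 1, 1]).data-now()
    is-roughly [list: 1, 2, 3, 4, 5, 6]
  strided-slice(two-dim, [list: 1, 0], [list: 3, 2], [list: 1, 1]).data-now()
    is-roughly [list: 3, 4, 5, 6]

  # A stride of 2 along the first axis takes every other row:
  strided-slice(two-dim, [list: 0, 0], [list: 3, 2], [list: 2, 1]).data-now()
    is-roughly [list: 1, 2, 5, 6]
end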
3.28.3 TensorBuffers
A TensorBuffer is a mutable counterpart to a Tensor: values can be written and read at individual coordinates, and the buffer can be converted into an immutable Tensor once it has been populated.
Returns true if val is a TensorBuffer; otherwise, returns false.
check:
  is-tensor-buffer(make-buffer([list: 1])) is true
  is-tensor-buffer(make-buffer([list: 8, 4, 10])) is true
  is-tensor-buffer(43) is false
  is-tensor-buffer("not a buffer") is false
  is-tensor-buffer({some: "thing"}) is false
end
3.28.3.1 TensorBuffer Constructors
Creates a TensorBuffer with the specified shape. The returned TensorBuffer’s values are initialized to ~0.
check:
  make-buffer([list: 1]).size() is 1
  make-buffer([list: 1]).shape() is [list: 1]
  make-buffer([list: 9, 5]).size() is 45
  make-buffer([list: 9, 5]).shape() is [list: 9, 5]

  # Check for error handling of rank-0 shapes:
  make-buffer(empty) raises "input shape List had zero elements"

  # Check for error handling of zero or negative dimension sizes:
  make-buffer([list: 0]) raises "Cannot create TensorBuffer"
  make-buffer([list: -1]) raises "Cannot create TensorBuffer"
  make-buffer([list: 4, 5, 0, 3]) raises "Cannot create TensorBuffer"
  make-buffer([list: 2, -5, -1, 4]) raises "Cannot create TensorBuffer"
end
3.28.3.2 TensorBuffer Methods
Returns the size of the TensorBuffer (the number of values stored in the TensorBuffer).
check:
  make-buffer([list: 1]).size() is 1
  make-buffer([list: 4]).size() is 4
  make-buffer([list: 3, 2]).size() is 6
  make-buffer([list: 4, 4]).size() is 16
  make-buffer([list: 4, 3, 5]).size() is 60
end
Returns a List<NumInteger> representing the shape of the TensorBuffer, where each element is the size of the corresponding dimension.
check:
  make-buffer([list: 1]).shape() is [list: 1]
  make-buffer([list: 4, 3]).shape() is [list: 4, 3]
  make-buffer([list: 2, 4, 1]).shape() is [list: 2, 4, 1]
  make-buffer([list: 4, 3, 5]).shape() is [list: 4, 3, 5]
end
Sets the value in the TensorBuffer at the specified indices to value.
check:
  test-buffer = make-buffer([list: 7])
  test-buffer.set-now(-45, [list: 0])
  test-buffer.set-now(9, [list: 2])
  test-buffer.set-now(0, [list: 4])
  test-buffer.set-now(-3.42, [list: 6])

  test-buffer.get-all-now() is-roughly [list: -45, 0, 9, 0, 0, 0, -3.42]
  test-buffer.to-tensor().shape() is [list: 7]
  test-buffer.to-tensor().data-now() is-roughly [list: -45, 0, 9, 0, 0, 0, -3.42]

  # Check out-of-bounds coordinates:
  test-buffer.set-now(10, [list: -1])
    raises "Coordinates must be within the bounds of the TensorBuffer's shape"
  test-buffer.set-now(10, [list: 8])
    raises "Coordinates must be within the bounds of the TensorBuffer's shape"

  # Check too few coordinates:
  test-buffer.set-now(10, [list:])
    raises "number of supplied coordinates must match the rank"

  # Check too many coordinates:
  test-buffer.set-now(10, [list: 9, 5])
    raises "number of supplied coordinates must match the rank"
end
Returns the value in the TensorBuffer at the specified indices.
check:
  test-buffer = make-buffer([list: 7])
  test-buffer.set-now(-45, [list: 0])
  test-buffer.set-now(9, [list: 2])
  test-buffer.set-now(0, [list: 4])
  test-buffer.set-now((4 / 3), [list: 5])
  test-buffer.set-now(-3.42, [list: 6])

  test-buffer.get-now([list: 0]) is-roughly -45
  test-buffer.get-now([list: 1]) is-roughly 0
  test-buffer.get-now([list: 2]) is-roughly 9
  test-buffer.get-now([list: 3]) is-roughly 0
  test-buffer.get-now([list: 4]) is-roughly 0
  test-buffer.get-now([list: 5]) is-roughly (4 / 3)
  test-buffer.get-now([list: 6]) is-roughly -3.42
end
Returns all values in the TensorBuffer.
check:
  one-dim-buffer = make-buffer([list: 7])
  one-dim-buffer.set-now(-45, [list: 0])
  one-dim-buffer.set-now(9, [list: 2])
  one-dim-buffer.set-now(0, [list: 4])
  one-dim-buffer.set-now((4 / 3), [list: 5])
  one-dim-buffer.set-now(-3.42, [list: 6])
  one-dim-buffer.get-all-now()
    is-roughly [list: -45, 0, 9, 0, 0, (4 / 3), -3.42]

  two-dim-buffer = make-buffer([list: 2, 2])
  two-dim-buffer.set-now(4, [list: 0, 0])
  two-dim-buffer.set-now(3, [list: 0, 1])
  two-dim-buffer.set-now(2, [list: 1, 0])
  two-dim-buffer.set-now(1, [list: 1, 1])
  two-dim-buffer.get-all-now() is-roughly [list: 4, 3, 2, 1]
end
Creates an immutable Tensor from the TensorBuffer.
check:
  one-dim-buffer = make-buffer([list: 7])
  one-dim-buffer.set-now(-45, [list: 0])
  one-dim-buffer.set-now(9, [list: 2])
  one-dim-buffer.set-now(0, [list: 4])
  one-dim-buffer.set-now(-3.42, [list: 6])
  one-dim-buffer.to-tensor().shape() is [list: 7]
  one-dim-buffer.to-tensor().data-now()
    is-roughly [list: -45, 0, 9, 0, 0, 0, -3.42]

  two-dim-buffer = make-buffer([list: 2, 2])
  two-dim-buffer.set-now(4, [list: 0, 0])
  two-dim-buffer.set-now(3, [list: 0, 1])
  two-dim-buffer.set-now(2, [list: 1, 0])
  two-dim-buffer.set-now(1, [list: 1, 1])
  two-dim-buffer.to-tensor().shape() is [list: 2, 2]
  two-dim-buffer.to-tensor().data-now() is-roughly [list: 4, 3, 2, 1]
end
3.28.4 Models
Models represent a collection of Layers, and define a series of inputs and outputs. They are one of the primary abstractions used in TensorFlow, and can be trained, evaluated, and used for prediction.
There are two types of models in TensorFlow: Sequential, where the outputs of one Layer are the inputs to the next Layer, and Model, which is more generic and supports arbitrary, non-cyclic graphs of Layers.
3.28.4.1 Generic Models
Returns true if val is a Model; otherwise, returns false.
Creates a new generic Model.
3.28.4.2 Sequential Models
Because the outputs of each Layer in a Sequential model feed directly into the next Layer, the first Layer passed to a Sequential model must have a defined input shape. This means that the LayerConfig used to instantiate the first Layer must have a defined input-shape or batch-input-shape parameter.
Returns true if val is a Sequential; otherwise, returns false.
Creates a new Sequential model.
Adds a Layer on top of the Sequential’s stack.
Configures and prepares the Sequential model for training and evaluation.
Compiling outfits the Sequential with an optimizer, loss, and/or metrics. Calling .fit or .evaluate on an un-compiled model will raise an error.
Returns the loss value and metrics values for the model in test mode.
Loss and metrics parameters should be specified in a call to .compile before calling this method.
Generates output predictions for the input samples.
Computation is done in batches.
Returns predictions for a single batch of samples.
Trains the model for a fixed number of epochs (iterations on a dataset).
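Putting these methods together, a minimal training sketch could look like the following. This is a sketch rather than verbatim library usage: the constructor name make-sequential, the exact shapes of the config objects, and the .fit callback arguments are assumptions; only the method names and their roles are taken from the descriptions above.

model = make-sequential({})

# The first Layer must define its input shape:
model.add(dense-layer({units: 1, input-shape: [list: 1]}))

# Compiling attaches an optimizer and a loss before training or evaluation:
model.compile({optimizer: "sgd", loss: "meanSquaredError"})

# Four training samples of one feature each:
xs = [tensor: 1, 2, 3, 4].as-2d(4, 1)
ys = [tensor: 2, 4, 6, 8].as-2d(4, 1)

# Train for a fixed number of epochs, then predict on new input:
model.fit(xs, ys, {epochs: 100}, lam(epoch, stats): nothing end)
predictions = model.predict([tensor: 5].as-2d(1, 1), {})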
3.28.5 SymbolicTensors
SymbolicTensors are placeholders for Tensors whose concrete values are not yet known. They are most often encountered when building a graph of Layers for a Model that takes in some kind of unknown input.
Returns true if val is a SymbolicTensor; otherwise, returns false.
3.28.5.1 SymbolicTensor Constructors
Creates a new SymbolicTensor with the input shape, not including the batch size.
none values in the input List represent dimensions of arbitrary length.
Creates a new SymbolicTensor with the input shape, where the first element in the input List is the batch size.
none values in the input List represent dimensions of arbitrary length.
3.28.5.2 SymbolicTensor Methods
Returns the shape of the SymbolicTensor. none values in the output List represent dimensions of arbitrary length.
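As a sketch (assuming the constructors above are named make-input and make-batch-input, and that a batch dimension of arbitrary length is prepended when only the per-sample shape is given):

check:
  make-input([list: some(3)]).shape() is [list: none, some(3)]
  make-batch-input([list: some(32), some(3)]).shape() is [list: some(32), some(3)]
end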
3.28.6 Layers
Layers will automatically take care of creating and initializing the various internal variables/weights they need to function.
Returns true if val is a Layer; otherwise, returns false.
3.28.6.1 Layer-Specific Datatypes
A LayerConfig is an Object that describes the properties of a Layer.
Every Layer can allow for different options in the LayerConfig used to construct them. Those options are specified underneath each Layer constructor. Additionally, the following options are permitted in every LayerConfig:
input-shape :: List<NumInteger>. Defines the input shape for the first layer of a model. This argument is only applicable to input layers (the first layer of a model). Only one of input-shape or batch-input-shape should be defined.
batch-input-shape :: List<NumInteger>. Defines the batch input shape for the first layer of a model. This argument is only applicable to input layers (the first layer of a model). Only one of input-shape or batch-input-shape should be defined.
batch-size :: NumInteger. If input-shape is specified, batch-size is used to construct the batch-input-shape in the form [list: batch-size, ...input-shape].
trainable :: Boolean. Whether this layer is trainable.
updatable :: Boolean. Whether the weights of this layer are updatable by a call to .fit.
All options allowed in a given Layer’s LayerConfig are optional unless otherwise stated.
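For instance (a sketch; the option names are the ones defined above, while the specific values are illustrative), a LayerConfig for the first Layer of a model could be:

{input-shape: [list: 3], batch-size: 32, trainable: true}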
"elu"
"hardSigmoid"
"linear"
"relu"
"relu6"
"selu"
"sigmoid"
"softmax"
"softplus"
"softsign"
"tanh"
"constant"
"glorotNormal"
"glorotUniform"
"heNormal"
"identity"
"leCunNormal"
"ones"
"orthogonal"
"randomNormal"
"randomUniform"
"truncatedNormal"
"varianceScaling"
"zeros"
"maxNorm"
"minMaxNorm"
"nonNeg"
"unitNorm"
"l1l2"
"channelsFirst"
"channelsLast"
"valid"
"same"
"casual"
3.28.6.2 Basic Layers
Applies an element-wise activation function to an output.
Other layers, most notably dense-layers, can also apply activation functions. This Layer can be used to extract the values before and after the activation.
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
activation :: Activation. Defines the activation function to apply in this Layer.
Creates a dense (fully-connected) Layer.
This Layer implements the operation output = activation(dot(input, kernel) + bias), where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the Layer, and bias is a bias vector created by the Layer if the use-bias option is set to true. (A construction sketch follows the option list below.)
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
units :: NumInteger. Required parameter. A positive integer specifying the dimensionality of the output space.
activation :: Activation. Defines the activation function to apply in this Layer.
use-bias :: Boolean. Whether to apply a bias vector.
kernel-initializer :: Initializer. Initializer for the dense kernel weights matrix.
bias-initializer :: Initializer. Initializer for the bias vector.
input-dim :: NumInteger. If specified, defines input-shape as [list: input-dim].
kernel-constraint :: Constraint. Constraint for the kernel weights matrix.
bias-constraint :: Constraint. Constraint for the bias vector.
kernel-regularizer :: Regularizer. Regularizer function applied to the dense kernel weights matrix.
bias-regularizer :: Regularizer. Regularizer function applied to the bias vector.
activity-regularizer :: Regularizer. Regularizer function applied to the activation.
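For example, a dense Layer with 32 output units and a ReLU activation could be constructed as follows (a sketch; dense-layer is the constructor name referenced elsewhere in this section, and the option values are illustrative):

dense-layer({units: 32, activation: "relu", use-bias: true})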
Applies dropout to the input.
Dropout consists of randomly setting a fraction rate of the input units to 0 at each update during training time, which helps prevent overfitting. See http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf for more information. (A construction sketch follows the option list below.)
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
rate :: Number. Required parameter. Denotes the fraction of the input units to drop; must be between 0 and 1.
noise-shape :: List<NumInteger>. Integer array representing the shape of the binary dropout mask that will be multiplied with the input.
For instance, if your inputs have shape [list: batch-size, timesteps, features] and you want the dropout mask to be the same for all timesteps, you can set noise-shape to [list: batch-size, 1, features].
seed :: NumInteger. An integer to use as random seed.
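For example, a Layer that drops a quarter of its input units at each training update could be constructed as follows (a sketch; the constructor name dropout-layer is an assumption based on the naming of the other Layer constructors):

dropout-layer({rate: 0.25, seed: 1})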
Maps positive integers (indices) into dense vectors of fixed size.
The input shape of this layer is a two-dimensional Tensor with shape [list: batch-size, sequence-length].
The output shape of this layer is a three-dimensional Tensor with shape [list: batch-size, sequence-length, output-dim]. (A construction sketch follows the option list below.)
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
input-dim :: NumInteger. Required parameter. Must also be a NumPositive. Denotes the size of the vocabulary; that is, the maximum integer index + 1.
output-dim :: NumInteger. Required parameter. Must also be a NumNonNegative. Dimension of the dense embedding.
embeddings-initializer :: Initializer. Initializer for embeddings matrix.
embeddings-regularizer :: Regularizer. Regularizer function applied to the embeddings matrix.
activity-regularizer :: Regularizer. Regularizer function applied to the activation.
embeddings-constraint :: Constraint. Constraint applied to the embeddings matrix.
mask-zero :: Boolean. Whether the input value 0 is a special "padding" value that should be masked out. This is useful when using recurrent layers which may take variable length input.
If set to true, all subsequent layers in the model need to support masking, or an exception will be raised. Additionally, if mask-zero is set to true, index 0 cannot be used in the vocabulary (that is, input-dim should equal the size of the vocabulary + 1).
input-length :: List<NumInteger>. Length of input sequences, when it is constant.
This argument is required if you are going to connect flatten-layers then dense-layers upstream, since otherwise the shape of the dense outputs cannot be computed.
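For example, an embedding of a 1,000-entry vocabulary into 64-dimensional vectors could be constructed as follows (a sketch; the constructor name embedding-layer is an assumption based on the naming of the other Layer constructors):

embedding-layer({input-dim: 1000, output-dim: 64})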
Flattens the input. Does not affect the batch size.
A flatten-layer flattens each batch in its inputs to one dimension (making the output two dimensional).
The config passed to this constructor does not support any additional options other than the default LayerConfig options.
Repeats the input num-repeats times in a new dimension.
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
num-repeats :: NumInteger. Required parameter. Must also be a NumPositive. Represents the number of times to repeat the input.
Reshapes an input to a certain shape.
The input shape can be arbitrary, although all dimensions in the input shape must be fixed.
The output shape is [list: batch-size, target-shape.get(0), ..., target-shape.get(i)]. (A construction sketch follows the option list below.)
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
target-shape :: List<NumInteger>. The target shape; should not include the batch-size.
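For example, a Layer that reshapes each 16-element input row into a 4 x 4 matrix, producing outputs of shape [list: batch-size, 4, 4], could be constructed as follows (a sketch; the constructor name reshape-layer is an assumption based on the naming of the other Layer constructors):

reshape-layer({target-shape: [list: 4, 4]})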
3.28.6.3 Convolutional Layers
A one-dimensional convolution Layer.
This layer creates a convolution kernel that is convolved with the layer input over a single spatial (or temporal) dimension to produce a Tensor of outputs.
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
filters :: NumInteger. Required parameter. The dimensionality of the output space; that is, the number of filters in the convolution.
A two-dimensional convolution Layer.
This layer creates a convolution kernel that is convolved with the layer input to produce a Tensor of outputs.
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
filters :: NumInteger. Required parameter. The dimensionality of the output space; that is, the number of filters in the convolution.
Transposed convolutional Layer. This is sometimes known as a "deconvolution" layer.
The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution; for example, from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution.
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
filters :: NumInteger. Required parameter. The dimensionality of the output space; that is, the number of filters in the convolution.
Crops a two-dimensional input at the top, bottom, left, and right sides (for example, image data).
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
cropping :: {top-crop :: NumInteger, bottom-crop :: NumInteger, left-crop :: NumInteger, right-crop :: NumInteger}. Required parameter. An Object that specifies the cropping along each side of the width and the height.
data-format :: DataFormat. Format of the data, which determines the ordering of the dimensions in the inputs.
Depthwise separable two-dimensional convolution.
A depthwise separable convolution consists of performing just the first step in a depthwise spatial convolution (which acts on each input channel separately). The depth-multiplier argument controls how many output channels are generated per input channel in the depthwise step.
In addition to the default LayerConfig options, the config passed to this constructor can also contain:
kernel-size :: {width :: NumInteger, height :: NumInteger}. Required parameter. An Object that specifies the width and height of the two-dimensional convolution window.
depth-multiplier :: NumInteger. The number of depthwise convolution output channels for each input channel.
depthwise-initializer :: Initializer. Initializer for the depthwise kernel matrix.
depthwise-constraint :: Constraint. Constraint for the depthwise kernel matrix.
depthwise-regularizer :: Regularizer. Regularizer function applied to the depthwise kernel matrix.
3.28.6.4 Merge Layers
3.28.6.5 Normalization Layers
3.28.6.6 Pooling Layers
3.28.6.7 Recurrent Layers
3.28.6.8 Wrapper Layers
3.28.7 Optimizers
Optimizers eagerly compute gradients. This means that when a user provides a function that is a combination of TensorFlow operations to an Optimizer, the Optimizer automatically differentiates that function’s output with respect to its inputs.
Returns true if val is an Optimizer; otherwise, returns false.
3.28.7.1 Optimizer Constructors
There are many different types of Optimizers that use different formulas to compute gradients.
Constructs an Optimizer that uses a stochastic gradient descent algorithm, where learning-rate is the learning rate to use for the algorithm.
Constructs an Optimizer that uses a momentum gradient descent algorithm, where learning-rate is the learning rate to use for the algorithm and momentum is the momentum to use for the algorithm.
See http://proceedings.mlr.press/v28/sutskever13.pdf.
Constructs an Optimizer that uses the Adagrad algorithm, where learning-rate is the learning rate to use for the Adagrad gradient descent algorithm.
If not none, initial-accumulator is the positive starting value for the accumulators in the Adagrad algorithm. If initial-accumulator is specified but is not positive, the function raises an error.
See http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf or http://ruder.io/optimizing-gradient-descent/index.html#adagrad.
Constructs an Optimizer that uses the Adadelta algorithm.
If not none, learning-rate is the learning rate to use for the Adadelta gradient descent algorithm, rho is the learning rate decay over each update, and epsilon is a constant used to better condition the gradient updates.
See https://arxiv.org/abs/1212.5701.
Constructs an Optimizer that uses the Adam algorithm.
If not none, learning-rate is the learning rate to use for the Adam gradient descent algorithm, beta-1 is the exponential decay rate for the first moment estimates, beta-2 is the exponential decay rate for the second moment estimates, and epsilon is a small constant for numerical stability.
See https://arxiv.org/abs/1412.6980.
Constructs an Optimizer that uses the Adamax algorithm.
If not none, learning-rate is the learning rate to use for the Adamax gradient descent algorithm, beta-1 is the exponential decay rate for the first moment estimates, beta-2 is the exponential decay rate for the second moment estimates, epsilon is a small constant for numerical stability, and decay is the learning rate decay over each update.
See https://arxiv.org/abs/1412.6980.
Constructs an Optimizer that uses RMSProp gradient descent, where learning-rate is the learning rate to use for the RMSProp gradient descent algorithm.
If not none, decay represents the discounting factor for the history/coming gradient, momentum represents the momentum to use for the RMSProp gradient descent algorithm, and epsilon is a small value to avoid division-by-zero errors.
If is-centered is true, gradients are normalized by the estimated variance of the gradient.
See these slides from the University of Toronto for a primer on RMSProp.
Note: This TensorFlow.js implementation uses plain momentum and is not the "centered" version of RMSProp.
3.28.7.2 Optimizer Methods
Executes f and minimizes the scalar output of f by computing its gradients with respect to the list of trainable, variable Tensors provided by variables.
f must be a thunk that returns a scalar Tensor. The method then returns the scalar Tensor produced by f.
If variables is empty, the Optimizer will default to training all trainable variables that have been instantiated.
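For instance, one gradient-descent step on a single variable could look like the following sketch. Only .minimize’s thunk-and-variables shape is taken from this section; the constructor name train-sgd and the tensor-arithmetic helpers subtract-tensors and multiply-tensors are assumptions.

# Minimize f(x) = (x - 3)^2 starting from x = 0:
x = make-scalar(0).to-variable()
optimizer = train-sgd(0.1)

# f is a thunk returning a scalar Tensor, as .minimize requires;
# x is listed in variables so its gradient is computed and applied:
step-loss = optimizer.minimize(lam():
    diff = subtract-tensors(x, make-scalar(3))
    multiply-tensors(diff, diff)
  end, [list: x])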