Tensors in Machine Learning
1. Introduction
Hello everyone! Let's understand tensors in machine learning. Before starting with tensors, we will assume that you already know what machine learning is; for a brief and concise definition, have a look at "The Hundred-Page Machine Learning Book" by Andriy Burkov.
2. Understanding Tensors
Let's take a look at tensors now that we have covered the basics. Do not be spooked by the term "tensor"; it is nothing more than a straightforward mathematical idea, and tensors should be easy to understand if you have a basic grasp of linear algebra. Tensors are higher-dimensional generalizations of scalars, vectors, and matrices. Starting with scalars: they are simply numbers, commonplace ones like zero, one, and 23, as well as rational and irrational numbers like two-thirds and pi. These numbers have no dimensions; a scalar is a tensor of order zero.
On the other hand, there are vectors. A vector can be thought of as a list of numbers. The number of elements in the vector is known as the vector's length or, in some contexts, its dimension. A list of three numbers, for example, is a three-dimensional vector. The numbers in the list are the coordinates of a point in a space with the same number of dimensions as the list has elements: the vector (0, 1, 0), for instance, identifies a point in three-dimensional space. Matrices are numerical tables organized in rows and columns. A grayscale image, for example, is a two-dimensional matrix in which each entry corresponds to the value of a pixel. It is also worth noting that a matrix can be conceived of as a collection of vectors of the same length; each vector would correspond to either a row or a column of the image. Similarly, a vector can be conceived of as a collection of scalars (numbers). This recursive design, in which an object of one order may be considered as a list of objects of the previous order, allows us to organize scalars, vectors, and matrices within the broader family of tensors.
Based on this construction, we can define tensors as nested lists of objects of the previous order, all of which are the same size. An order-three tensor, for example, can be thought of as a collection of matrices with the same number of rows and columns. Since those matrices are tensors of order two and all share the same number of rows and columns, the order-three tensor forms a cuboid of numbers, and we can locate any number by travelling along each of the three axes. The row, column, and depth at which a number is stored distinguish it from the others. The concept of shape formalizes this idea.
The shape of a tensor indicates how many objects there are when counting along each axis. A vector, for example, has only one axis, and its shape is a single number: the vector's length. A matrix has two axes, whose sizes indicate the number of rows and columns. An order-three tensor has three axes, whose sizes indicate the number of rows, columns, and the depth.
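To make this concrete, here is a minimal NumPy sketch (an illustrative addition with made-up values) that checks the number of axes and the shape of a vector, a matrix, and an order-three tensor:

from numpy import array

# Order 1: a vector with a single axis of length 3
v = array([1, 2, 3])
print(v.ndim, v.shape)  # 1 (3,)

# Order 2: a matrix with two axes (2 rows, 3 columns)
M = array([[1, 2, 3], [4, 5, 6]])
print(M.ndim, M.shape)  # 2 (2, 3)

# Order 3: two stacked 2 x 3 matrices form a cuboid of numbers
T = array([[[1, 2, 3], [4, 5, 6]],
           [[7, 8, 9], [10, 11, 12]]])
print(T.ndim, T.shape)  # 3 (2, 2, 3)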
3. Tensors in Machine Learning
Remember that most machines cannot learn in the absence of data, and today's data is frequently multidimensional. Tensors can help with machine learning by encoding multidimensional data. An image, for example, is typically represented by three fields: width, height, and depth (colour channels), so it makes perfect sense to represent it as a three-dimensional tensor. However, we frequently deal with tens of thousands of images, so a fourth field, the sample size, enters the picture. A collection of images, such as the well-known MNIST dataset, may be stored in TensorFlow as a 4D tensor. This representation makes it simple to tackle problems involving large amounts of data.
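As a rough sketch of this idea (the dataset size and image dimensions below are assumptions for illustration, not taken from the article), a batch of grayscale images can be held in a single 4D array of shape (samples, height, width, channels):

from numpy import zeros

# A hypothetical batch of 10,000 MNIST-sized grayscale images
batch = zeros((10000, 28, 28, 1))  # (samples, height, width, channels)
print(batch.shape)  # (10000, 28, 28, 1)

# A single image is an order-3 slice of the batch
image = batch[0]
print(image.shape)  # (28, 28, 1)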
4. Element-Wise Tensor Operations
We can do element-wise arithmetic between tensors in the same way that we can with matrices. In this article, we will go through four basic operations: element-wise addition, subtraction, and division, followed by the tensor product.
4.1 Tensor Addition
The element-wise addition of two tensors with the same dimensions results in a new tensor with those dimensions, where each scalar value is the element-wise sum of the scalars in the parent tensors.
$$P = \left( \begin{pmatrix} p_{111} & p_{121} & p_{131} \\ p_{211} & p_{221} & p_{231} \end{pmatrix},\; \begin{pmatrix} p_{112} & p_{122} & p_{132} \\ p_{212} & p_{222} & p_{232} \end{pmatrix} \right)$$

$$Q = \left( \begin{pmatrix} q_{111} & q_{121} & q_{131} \\ q_{211} & q_{221} & q_{231} \end{pmatrix},\; \begin{pmatrix} q_{112} & q_{122} & q_{132} \\ q_{212} & q_{222} & q_{232} \end{pmatrix} \right)$$

$$D = P + Q = \left( \begin{pmatrix} p_{111}+q_{111} & p_{121}+q_{121} & p_{131}+q_{131} \\ p_{211}+q_{211} & p_{221}+q_{221} & p_{231}+q_{231} \end{pmatrix},\; \begin{pmatrix} p_{112}+q_{112} & p_{122}+q_{122} & p_{132}+q_{132} \\ p_{212}+q_{212} & p_{222}+q_{222} & p_{232}+q_{232} \end{pmatrix} \right)$$
In NumPy, we can add tensors by adding the corresponding arrays.
from numpy import array

# Two order-3 tensors of the same shape (3, 3, 3)
P = array([
    [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
    [[11, 12, 13], [14, 15, 16], [17, 18, 19]],
    [[21, 22, 23], [24, 25, 26], [27, 28, 29]],
])
Q = array([
    [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
    [[11, 12, 13], [14, 15, 16], [17, 18, 19]],
    [[21, 22, 23], [24, 25, 26], [27, 28, 29]],
])

# Element-wise addition
D = P + Q
print(D)
4.2 Tensor Subtraction
When one tensor is subtracted element-wise from another tensor of the same dimensions, a new tensor of those dimensions is produced, with each scalar value being the element-wise difference of the scalars in the parent tensors.
$$P = \left( \begin{pmatrix} p_{111} & p_{121} & p_{131} \\ p_{211} & p_{221} & p_{231} \end{pmatrix},\; \begin{pmatrix} p_{112} & p_{122} & p_{132} \\ p_{212} & p_{222} & p_{232} \end{pmatrix} \right)$$

$$Q = \left( \begin{pmatrix} q_{111} & q_{121} & q_{131} \\ q_{211} & q_{221} & q_{231} \end{pmatrix},\; \begin{pmatrix} q_{112} & q_{122} & q_{132} \\ q_{212} & q_{222} & q_{232} \end{pmatrix} \right)$$

$$D = P - Q = \left( \begin{pmatrix} p_{111}-q_{111} & p_{121}-q_{121} & p_{131}-q_{131} \\ p_{211}-q_{211} & p_{221}-q_{221} & p_{231}-q_{231} \end{pmatrix},\; \begin{pmatrix} p_{112}-q_{112} & p_{122}-q_{122} & p_{132}-q_{132} \\ p_{212}-q_{212} & p_{222}-q_{222} & p_{232}-q_{232} \end{pmatrix} \right)$$
In NumPy, we can subtract tensors by subtracting the corresponding arrays.
from numpy import array

# Two order-3 tensors of the same shape (3, 3, 3)
P = array([
    [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
    [[11, 12, 13], [14, 15, 16], [17, 18, 19]],
    [[21, 22, 23], [24, 25, 26], [27, 28, 29]],
])
Q = array([
    [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
    [[11, 12, 13], [14, 15, 16], [17, 18, 19]],
    [[21, 22, 23], [24, 25, 26], [27, 28, 29]],
])

# Element-wise subtraction
D = P - Q
print(D)
4.3 Tensor Division
When one tensor is divided element-wise by another tensor with the same dimensions, a new tensor with those dimensions is created, with each scalar value being the element-wise quotient of the scalars in the parent tensors.
$$P = \left( \begin{pmatrix} p_{111} & p_{121} & p_{131} \\ p_{211} & p_{221} & p_{231} \end{pmatrix},\; \begin{pmatrix} p_{112} & p_{122} & p_{132} \\ p_{212} & p_{222} & p_{232} \end{pmatrix} \right)$$

$$Q = \left( \begin{pmatrix} q_{111} & q_{121} & q_{131} \\ q_{211} & q_{221} & q_{231} \end{pmatrix},\; \begin{pmatrix} q_{112} & q_{122} & q_{132} \\ q_{212} & q_{222} & q_{232} \end{pmatrix} \right)$$

$$D = P / Q = \left( \begin{pmatrix} p_{111}/q_{111} & p_{121}/q_{121} & p_{131}/q_{131} \\ p_{211}/q_{211} & p_{221}/q_{221} & p_{231}/q_{231} \end{pmatrix},\; \begin{pmatrix} p_{112}/q_{112} & p_{122}/q_{122} & p_{132}/q_{132} \\ p_{212}/q_{212} & p_{222}/q_{222} & p_{232}/q_{232} \end{pmatrix} \right)$$
In NumPy, we can divide tensors by dividing the corresponding arrays.
from numpy import array

# Two order-3 tensors of the same shape (3, 3, 3)
P = array([
    [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
    [[11, 12, 13], [14, 15, 16], [17, 18, 19]],
    [[21, 22, 23], [24, 25, 26], [27, 28, 29]],
])
Q = array([
    [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
    [[11, 12, 13], [14, 15, 16], [17, 18, 19]],
    [[21, 22, 23], [24, 25, 26], [27, 28, 29]],
])

# Element-wise division
D = P / Q
print(D)
4.4 Tensor Product
The tensor product operator is frequently represented as a circle with a small cross in the centre; we will write it as $\otimes$ here.
Let's assume that we have a tensor P with p dimensions and a tensor Q with q dimensions; the tensor product of the two will be a new tensor with p + q dimensions.
The tensor product does not have to be done on higher-order tensors; it may also be applied to matrices and vectors, which is an excellent place to start building an intuition for the higher dimensions.
Let's have a look at the tensor product for vectors.
$$p = (p_1, p_2), \qquad q = (q_1, q_2)$$

$$d = p \otimes q = \begin{pmatrix} p_1 \cdot (q_1, q_2) \\ p_2 \cdot (q_1, q_2) \end{pmatrix}$$

Or, unrolled:

$$d = \begin{pmatrix} p_1 q_1 & p_1 q_2 \\ p_2 q_1 & p_2 q_2 \end{pmatrix}$$

Let's have a look at the tensor product for matrices.
$$P = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix}, \qquad Q = \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix}$$

$$D = P \otimes Q = \begin{pmatrix} p_{11} \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix} & p_{12} \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix} \\ p_{21} \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix} & p_{22} \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{pmatrix} \end{pmatrix}$$

Or, unrolled:

$$D = \begin{pmatrix} p_{11} q_{11} & p_{11} q_{12} & p_{12} q_{11} & p_{12} q_{12} \\ p_{11} q_{21} & p_{11} q_{22} & p_{12} q_{21} & p_{12} q_{22} \\ p_{21} q_{11} & p_{21} q_{12} & p_{22} q_{11} & p_{22} q_{12} \\ p_{21} q_{21} & p_{21} q_{22} & p_{22} q_{21} & p_{22} q_{22} \end{pmatrix}$$
NumPy’s tensordot() function may be used to do tensor product calculations.
The function accepts the two tensors to multiply and an axis over which to sum the results, referred to as the sum reduction. In NumPy, the axes argument must be set to zero to compute the tensor product, commonly known as the tensor dot product.
In the example below, we define two order-1 tensors (vectors) with two elements each and calculate the tensor product.
from numpy import array
from numpy import tensordot

# Two order-1 tensors (vectors)
P = array([1, 2])
Q = array([3, 4])

# axes=0 computes the tensor (outer) product
D = tensordot(P, Q, axes=0)
print(D)  # [[3 4]
          #  [6 8]]
The tensor product is the most frequent type of tensor multiplication, but there are many more, such as the tensor dot product and the tensor contraction.
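As one hedged illustration of such a variant, passing axes=1 to the same tensordot() function sums over one pair of axes; for two vectors this yields the ordinary dot product:

from numpy import array
from numpy import tensordot

P = array([1, 2])
Q = array([3, 4])

# axes=1 sums over one axis pair: 1*3 + 2*4 = 11
d = tensordot(P, Q, axes=1)
print(d)  # 11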
5. How Are Tensors Used in Programming?
Tensors in mathematics are not the same as tensors in programming; the programming objects merely inherit some of the mathematical characteristics and borrow some of the notation. In code, tensors are simply expressed as arrays of arrays, or lists of lists. There are many ways to manipulate this representation in machine learning, and it does not have to follow the rigorous coordinate-transformation rules established in mathematics and physics. As a result, a programming tensor cannot be regarded as a perfect substitute for a mathematical tensor, but it does inherit some of its mathematical properties.
In TensorFlow, every tensor has a shape property. A rank-2 tensor, for instance, has a shape (x, y), where x denotes the number of lists/arrays contained within the tensor and y denotes the length of each of them; the length must be the same for every list/array inside.
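For example, here is a minimal sketch (assuming TensorFlow is installed) that inspects the shape property of a rank-2 tensor:

import tensorflow as tf

# Two inner lists, each of length 3, so the shape is (2, 3)
t = tf.constant([[1, 2, 3], [4, 5, 6]])
print(t.shape)  # (2, 3)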
For instance, in the majority of recommender systems, model-based Collaborative Filtering methods such as Matrix Factorization do not allow contextual information to be incorporated into the models. By using an N-dimensional tensor rather than the conventional two-dimensional User-Item matrix, researchers created more robust models that offer context-aware suggestions. Increased computing power in recent years has also made these computationally intensive tensor operations practical.
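To make this concrete, here is an illustrative NumPy sketch (the shapes and values are invented for this example, not drawn from any real recommender system) of a user-item-context ratings tensor, where each slice along the context axis is an ordinary two-dimensional User-Item matrix:

from numpy import zeros

# Hypothetical ratings tensor: (users, items, contexts)
ratings = zeros((100, 50, 3))

# User 0 rated item 7 with a 5, but only in context 2 (say, "weekend")
ratings[0, 7, 2] = 5.0

# Slicing out one context recovers a classic User-Item matrix
weekend_matrix = ratings[:, :, 2]
print(weekend_matrix.shape)  # (100, 50)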
6. Why the Sudden Fascination with Tensors in Machine Learning and Deep Learning?
Tensors are represented using arrays (matrices and their higher-dimensional generalizations), which simplifies the process of storing data. Consider a picture with a resolution of Y x Y pixels: the image's pixel data may be expressed directly as an array, and the same is true for video frames. The representation stays straightforward. Thus, the critical takeaway is that tensors give us a representation that is nearly identical to the natural representation of these objects.
This is why TensorFlow is popular among developers and is being adopted by many businesses, even though many alternative frameworks are available.
7. Conclusion
That concludes my discussion of tensors. These are fundamental concepts that will serve you well when working with tensors in practice. I hope this article helped broaden your perspective. Many thanks for taking the time to read this post!