# Basic Usage
In the following examples, we demonstrate the einsum notation for basic tensor operations.
## Einsum notation
To specify the operation, the user can either use the `ein"..."` string literal (implemented by the `@ein_str` macro) or construct an `EinCode` object directly. For example, both of the following code snippets define the matrix multiplication operation:
```julia
julia> using OMEinsum

julia> code1 = ein"ij,jk -> ik"  # the string literal
ij, jk -> ik

julia> ixs = [[1, 2], [2, 3]]  # the input indices
2-element Vector{Vector{Int64}}:
 [1, 2]
 [2, 3]

julia> iy = [1, 3]  # the output indices
2-element Vector{Int64}:
 1
 3

julia> code2 = EinCode(ixs, iy)  # the EinCode object (equivalent to the string literal)
1∘2, 2∘3 -> 1∘3
```
An einsum notation object is callable, so it can be applied to the input tensors directly. The same contraction can also be carried out with the lower-level `einsum` and `einsum!` functions, or with the all-in-one `@ein` macro:
```julia
julia> A, B = randn(2, 3), randn(3, 4);

julia> code1(A, B)  # matrix multiplication
2×4 Matrix{Float64}:
  1.99806  -3.71491   -2.09126    2.53882
 -2.15289  -0.764043  -0.262073   0.890841

julia> size_dict = OMEinsum.get_size_dict(getixsv(code1), (A, B))  # get the size of the labels
Dict{Char, Int64} with 3 entries:
  'j' => 3
  'i' => 2
  'k' => 4

julia> einsum(code1, (A, B), size_dict)  # lower-level function
2×4 Matrix{Float64}:
  1.99806  -3.71491   -2.09126    2.53882
 -2.15289  -0.764043  -0.262073   0.890841

julia> einsum!(code1, (A, B), zeros(2, 4), true, false, size_dict)  # the in-place operation
2×4 Matrix{Float64}:
  1.99806  -3.71491   -2.09126    2.53882
 -2.15289  -0.764043  -0.262073   0.890841

julia> @ein C[i,k] := A[i,j] * B[j,k]  # all-in-one macro
2×4 Matrix{Float64}:
  1.99806  -3.71491   -2.09126    2.53882
 -2.15289  -0.764043  -0.262073   0.890841
```
Here, the `@ein` macro combines the einsum notation definition and the operation in a single line, which is convenient for simple operations. Separating the notation from the operation (the first approach) is useful for reusing the same notation on multiple sets of input tensors, as sketched below. The lower-level functions `einsum` and `einsum!` offer more control over the operation; for example, `einsum!` writes the result into a preallocated output tensor.
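For instance, a notation defined once can be applied to many inputs of compatible shape (a minimal sketch using only the calls shown above):

```julia
using OMEinsum

code = ein"ij,jk -> ik"                       # define the notation once
batches = [(randn(2, 3), randn(3, 4)) for _ in 1:3]
results = [code(A, B) for (A, B) in batches]  # reuse it for every pair of tensors
@assert all(r -> size(r) == (2, 4), results)
```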
For more than two input tensors, the `ein"..."` string literal does not optimize the contraction order. In such cases, the user can either use the `optein"..."` string literal (the `@optein_str` macro) to optimize the contraction order automatically, or specify the contraction order manually with parentheses:
```julia
julia> tensors = [randn(100, 100) for _ in 1:4];

julia> optein"ij,jk,kl,lm->im"(tensors...)  # optimized contraction (without knowing the size)
100×100 Matrix{Float64}:
 -1067.92    645.948   408.753  …  -1005.07   -648.995   -983.512
  2225.2     877.073   212.699      -633.868   695.908    774.884
 -2523.87  -1416.2    -437.917       763.039  1436.28    -987.12
  -215.305  -458.901    87.841      -325.236  -948.838   1116.59
   976.712   797.583  1112.49        294.569  -940.402     -1.77171
 -1006.66   -504.297   188.721  …    683.267   462.149    427.432
   987.853  1293.45   -816.224     -1227.48    523.425   -345.639
  -432.155 -1553.75   -297.474      1621.66   1064.33     166.14
  -109.058  -387.933  -535.823       426.524    77.0437  -714.075
 -1591.43    305.712  -669.233      -475.424  1175.55     220.82
     ⋮                          ⋱
   424.486  -805.419   359.626      -862.091  -128.857   -802.352
 -1522.56   -994.825  -196.899        97.8683  247.345    885.577
  2122.58    720.571  -648.923       893.851 -1479.88    -127.132
 -1353.83   1249.91    221.358      -278.54    -29.4056   110.081
    63.7799 -765.896  1241.86   …   -539.398  -683.028    -71.298
   772.876  -438.546 -1103.2        1291.04   -457.393   -643.109
 -1478.05   -739.545  -499.857      -449.021  -929.969  -1089.47
  -433.29   -978.171   215.151      -574.032 -1426.57    -535.495
  -860.211   131.707  -408.818      1086.58    874.124   -481.461

julia> ein"(ij,jk),(kl,lm)->im"(tensors...)  # manually specified contraction
100×100 Matrix{Float64}:
 -1067.92    645.948   408.753  …  -1005.07   -648.995   -983.512
  2225.2     877.073   212.699      -633.868   695.908    774.884
 -2523.87  -1416.2    -437.917       763.039  1436.28    -987.12
  -215.305  -458.901    87.841      -325.236  -948.838   1116.59
   976.712   797.583  1112.49        294.569  -940.402     -1.77171
 -1006.66   -504.297   188.721  …    683.267   462.149    427.432
   987.853  1293.45   -816.224     -1227.48    523.425   -345.639
  -432.155 -1553.75   -297.474      1621.66   1064.33     166.14
  -109.058  -387.933  -535.823       426.524    77.0437  -714.075
 -1591.43    305.712  -669.233      -475.424  1175.55     220.82
     ⋮                          ⋱
   424.486  -805.419   359.626      -862.091  -128.857   -802.352
 -1522.56   -994.825  -196.899        97.8683  247.345    885.577
  2122.58    720.571  -648.923       893.851 -1479.88    -127.132
 -1353.83   1249.91    221.358      -278.54    -29.4056   110.081
    63.7799 -765.896  1241.86   …   -539.398  -683.028    -71.298
   772.876  -438.546 -1103.2        1291.04   -457.393   -643.109
 -1478.05   -739.545  -499.857      -449.021  -929.969  -1089.47
  -433.29   -978.171   215.151      -574.032 -1426.57    -535.495
  -860.211   131.707  -408.818      1086.58    874.124   -481.461
```
Sometimes, manually optimizing the contraction order can be beneficial. Please check Contraction order optimization for more details.
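As a quick preview, the contraction order can also be optimized explicitly. The sketch below assumes the `optimize_code` and `uniformsize` helpers and the `TreeSA` optimizer that OMEinsum re-exports from OMEinsumContractionOrders:

```julia
using OMEinsum

code = ein"ij,jk,kl,lm->im"
sizes = uniformsize(code, 100)                  # assume every label has dimension 100
optcode = optimize_code(code, sizes, TreeSA())  # search for a good contraction order
tensors = [randn(100, 100) for _ in 1:4]
result = optcode(tensors...)                    # evaluate with the optimized order
```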
## Einsum examples
We first define the tensors and then demonstrate the einsum notation for various tensor operations.
```julia
julia> using OMEinsum

julia> s = fill(1)  # scalar
0-dimensional Array{Int64, 0}:
1

julia> w, v = [1, 2], [4, 5];  # vectors

julia> A, B = [1 2; 3 4], [5 6; 7 8];  # matrices

julia> T1, T2 = reshape(1:8, 2, 2, 2), reshape(9:16, 2, 2, 2);  # 3D tensors
```
### Unary examples
julia> ein"i->"(w) # sum of the elements of a vector.0-dimensional Array{Int64, 0}: 3julia> ein"ij->i"(A) # sum of the rows of a matrix.2-element Vector{Int64}: 3 7julia> ein"ii->"(A) # sum of the diagonal elements of a matrix, i.e., the trace.0-dimensional Array{Int64, 0}: 5julia> ein"ij->"(A) # sum of the elements of a matrix.0-dimensional Array{Int64, 0}: 10julia> ein"i->ii"(w) # create a diagonal matrix.2×2 Matrix{Int64}: 1 0 0 2julia> ein"i->ij"(w; size_info=Dict('j'=>2)) # repeat a vector to form a matrix.2×2 Matrix{Int64}: 1 1 2 2julia> ein"ijk->ikj"(T1) # permute the dimensions of a tensor.2×2×2 Array{Int64, 3}: [:, :, 1] = 1 5 2 6 [:, :, 2] = 3 7 4 8
### Binary examples
julia> ein"ij, jk -> ik"(A, B) # matrix multiplication.2×2 Matrix{Int64}: 19 22 43 50julia> ein"ijb,jkb->ikb"(T1, T2) # batch matrix multiplication.2×2×2 Array{Int64, 3}: [:, :, 1] = 39 47 58 70 [:, :, 2] = 163 187 190 218julia> ein"ij,ij->ij"(A, B) # element-wise multiplication.2×2 Matrix{Int64}: 5 12 21 32julia> ein"ij,ij->"(A, B) # sum of the element-wise multiplication.0-dimensional Array{Int64, 0}: 70julia> ein"ij,->ij"(A, s) # element-wise multiplication by a scalar.2×2 Matrix{Int64}: 1 2 3 4
### N-ary examples
julia> optein"ai,aj,ak->ijk"(A, A, B) # star contraction.2×2×2 Array{Int64, 3}: [:, :, 1] = 68 94 94 132 [:, :, 2] = 78 108 108 152julia> optein"ia,ajb,bkc,cld,dm->ijklm"(A, T1, T2, T1, A) # tensor train contraction.2×2×2×2×2 Array{Int64, 5}: [:, :, 1, 1, 1] = 9500 14564 21604 33420 [:, :, 2, 1, 1] = 11084 17012 25204 39036 [:, :, 1, 2, 1] = 13644 20916 31028 47996 [:, :, 2, 2, 1] = 15932 24452 36228 56108 [:, :, 1, 1, 2] = 13214 20258 30050 46486 [:, :, 2, 1, 2] = 15414 23658 35050 54286 [:, :, 1, 2, 2] = 19430 29786 44186 68350 [:, :, 2, 2, 2] = 22686 34818 51586 79894
## Computation Backends
OMEinsum supports multiple backends for tensor contractions. The backend determines how the underlying computation is performed.
### Available Backends
| Backend | Description | Best For |
|---|---|---|
| `DefaultBackend()` | BLAS/CUBLAS via reshape/permute | General use, matrix operations |
| `CuTensorBackend()` | NVIDIA cuTENSOR | GPU tensor network contractions |
### Changing Backends
```julia
julia> get_einsum_backend()  # check the current backend
DefaultBackend()

julia> set_einsum_backend!(DefaultBackend())  # set the backend to the default
DefaultBackend()
```
For GPU acceleration with cuTENSOR, see CUDA Acceleration.