OMEinsum.DynamicEinCodeType
DynamicEinCode{LT}
DynamicEinCode(ixs, iy)

Wrapper for an einsum specification that creates a callable object to evaluate the einsum ixs -> iy, where ixs are the index-labels of the input tensors and iy are the index-labels of the output.

example

julia> a, b = rand(2,2), rand(2,2);

julia> OMEinsum.DynamicEinCode((('i','j'),('j','k')),('i','k'))(a, b) ≈ a * b
true
source
OMEinsum.DynamicNestedEinsumType
DynamicNestedEinsum{LT} <: NestedEinsum{LT}
DynamicNestedEinsum(args, eins)
DynamicNestedEinsum{LT}(tensorindex::Int)

Einsum with contraction order, where the type parameter LT is the label type. It has two constructors: one takes a tensorindex as input, which represents a leaf node in the contraction tree; the other takes an iterable of DynamicNestedEinsum, args, as the child nodes, and eins to specify the contraction operation.
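
example

A small sketch of the two constructors (added for illustration; it assumes the unexported names are reached via the OMEinsum. prefix and that a nested code is callable on the input tensors, like codes built with @ein_str):

julia> a, b = rand(2,2), rand(2,2);

julia> leaf1, leaf2 = OMEinsum.DynamicNestedEinsum{Char}(1), OMEinsum.DynamicNestedEinsum{Char}(2);

julia> eins = OMEinsum.DynamicEinCode((('i','j'),('j','k')),('i','k'));

julia> OMEinsum.DynamicNestedEinsum([leaf1, leaf2], eins)(a, b) ≈ a * b
true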

source
OMEinsum.EinArrayType
EinArray{T, N, TT, LX, LY, ICT, OCT} <: AbstractArray{T, N}

A struct to hold the intermediate result of an einsum where all index-labels of both input and output are expanded to a rank-N-array whose values are lazily calculated. Indices are arranged as inner indices (or reduced dimensions) first and then outer indices.

Type parameters are

* `T`: element type,
* `N`: array dimension,
* `TT`: type of "tuple of input arrays",
* `LX`: type of "tuple of input indexers",
* `LY`: type of the output indexer,
* `ICT`: type of the inner CartesianIndices,
* `OCT`: type of the outer CartesianIndices.
source
OMEinsum.EinCodeType
EinCode <: AbstractEinsum
EinCode(ixs, iy)

Abstract type for sum-product contraction code. The constructor returns a DynamicEinCode instance.
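
example

A brief illustration (it relies only on the constructor behaviour stated above and on DynamicEinCode being callable, as shown in its docstring):

julia> a, b = rand(2,2), rand(2,2);

julia> code = EinCode((('i','j'),('j','k')),('i','k'));

julia> code isa OMEinsum.DynamicEinCode
true

julia> code(a, b) ≈ a * b
true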

source
OMEinsum.EinIndexerType
EinIndexer{locs,N}

A structure for indexing EinArrays. locs are the index positions (among all indices). In the constructor, size is the size of the target tensor.

source
OMEinsum.EinIndexerMethod
EinIndexer{locs}(size::Tuple)

Constructor for EinIndexer for an object of size size where locs are the locations of relevant indices in a larger tuple.

source
OMEinsum.IndexGroupType
IndexGroup

Leaf in a contraction tree; contains the indices and the number of the tensor it describes. E.g. in "ij,jk -> ik", the indices "ij" belong to tensor 1, so it would be described by IndexGroup(['i','j'], 1).

source
OMEinsum.NestedEinsumConstructorType
NestedEinsumConstructor

Describes a (potentially) nested einsum. Important fields:

  • args, a vector of all inputs, each either an IndexGroup object corresponding to a tensor or a NestedEinsumConstructor,
  • iy, the indices of the output.
source
OMEinsum.StaticEinCodeType
StaticEinCode{LT, ixs, iy}

The static version of DynamicEinCode that matches the contraction rule at compile time. It is the default return type of the @ein_str macro. LT is the label type.
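
example

A short check (added for illustration; StaticEinCode is qualified here since it may not be exported):

julia> ein"ij,jk -> ik" isa OMEinsum.StaticEinCode
true

julia> ein"ij,jk -> ik"(rand(2,2), rand(2,2)) |> size
(2, 2)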

source
OMEinsum.StaticNestedEinsumType
StaticNestedEinsum{LT,args,eins} <: NestedEinsum{LT}
StaticNestedEinsum(args, eins)
StaticNestedEinsum{LT}(tensorindex::Int)

Einsum with contraction order, where the type parameter LT is the label type, args is a tuple of StaticNestedEinsum, eins is a StaticEinCode, and a leaf node is defined by setting eins to an integer. It has two constructors: one takes a tensorindex as input, which represents a leaf node in the contraction tree; the other takes an iterable of StaticNestedEinsum, args, as the child nodes, and eins to specify the contraction operation.

source
Base.getindexMethod
getindex(A::EinArray, inds...)

return the lazily calculated entry of A at index inds.

source
OMEinsum.allow_loopsMethod
allow_loops(flag::Bool)

Setting this to false will cause OMEinsum to log an error if it falls back to loop_einsum evaluation, instead of calling specialised kernels. The default is true.
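
example

A minimal usage sketch (the call only toggles a global flag, so its return value is suppressed here):

julia> OMEinsum.allow_loops(false);  # from now on, falling back to loop_einsum logs an error

julia> OMEinsum.allow_loops(true);   # restore the default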

source
OMEinsum.alluniqueMethod
allunique(ix::Tuple)

return true if all elements of ix appear only once in ix.

example

julia> using OMEinsum: allunique

julia> allunique((1,2,3,4))
true

julia> allunique((1,2,3,1))
false
source
OMEinsum.asarrayMethod
asarray(x[, parent::AbstractArray]) -> AbstractArray

Return a 0-dimensional array containing the item x; if x is already an array, return it unchanged. If a parent is supplied, try to match the parent array type.
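
example

A small illustration of both branches (added here; it assumes asarray returns array inputs unchanged, as described above):

julia> using OMEinsum: asarray

julia> asarray(1.0) isa AbstractArray{Float64, 0}
true

julia> x = ones(2);

julia> asarray(x) === x
true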

source
OMEinsum.einarrayMethod
einarray(::Val{ixs}, ::Val{iy}, xs, size_dict) -> EinArray

Constructor of EinArray from an EinCode, a tuple of tensors xs and a size_dict that assigns each index-label a size. The returned EinArray holds an intermediate result of the einsum specified by the EinCode, with indices corresponding to all unique labels in the einsum. Reduction over the (lazily calculated) dimensions that correspond to labels not present in the output leads to the result of the einsum.

example

julia> using OMEinsum: get_size_dict

julia> a, b = rand(2,2), rand(2,2);

julia> sd = get_size_dict((('i','j'),('j','k')), (a, b));

julia> ea = OMEinsum.einarray(Val((('i','j'),('j','k'))),Val(('i','k')), (a,b), sd);

julia> dropdims(sum(ea, dims=1), dims=1) ≈ a * b
true
source
OMEinsum.einsumFunction
einsum(code::EinCode, xs, size_dict)
einsum(rule, ixs, iy, xs, size_dict)

return the tensor that results from contracting the tensors xs according to their indices ixs (getixs(code)), where all indices that do not appear in the output iy (getiy(code)) are summed over. The result is permuted according to iy.

  • ixs - tuple of tuples of index-labels of the input-tensors xs

  • iy - tuple of index-labels of the output-tensor

  • xs - tuple of tensors

  • size_dict - a dictionary that maps index-labels to their sizes

example

julia> a, b = rand(2,2), rand(2,2);

julia> einsum(EinCode((('i','j'),('j','k')),('i','k')), (a, b)) ≈ a * b
true

julia> einsum(EinCode((('i','j'),('j','k')),('k','i')), (a, b)) ≈ permutedims(a * b, (2,1))
true
source
OMEinsum.einsum_gradMethod
einsum_grad(ixs, xs, iy, size_dict, cdy, i)

return the gradient of the result of evaluating the EinCode w.r.t. the i-th tensor in xs. cdy is the result of applying the EinCode to the xs.

example

julia> using OMEinsum: einsum_grad, get_size_dict

julia> a, b = rand(2,2), rand(2,2);

julia> c = einsum(EinCode((('i','j'),('j','k')), ('i','k')), (a,b));

julia> sd = get_size_dict((('i','j'),('j','k')), (a,b));

julia> einsum_grad((('i','j'),('j','k')), (a,b), ('i','k'), sd, c, 1) ≈ c * transpose(b)
true
source
OMEinsum.filliys!Method
filliys!(neinsum::NestedEinsumConstructor)

goes through all NestedEinsumConstructor objects in the tree and saves the correct iy in them.

source
OMEinsum.get_size_dict!Method
get_size_dict!(ixs, xs, size_info)

Return a dictionary that is used to get the size of an index-label in the einsum specification with input indices ixs and tensors xs, after consistency within ixs and between ixs and xs has been verified.
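
example

A sketch using the non-mutating get_size_dict, as in the other examples of this reference (it assumes the returned object can be indexed by label):

julia> using OMEinsum: get_size_dict

julia> a, b = rand(2,3), rand(3,4);

julia> sd = get_size_dict((('i','j'),('j','k')), (a, b));

julia> (sd['i'], sd['j'], sd['k'])
(2, 3, 4)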

source
OMEinsum.getixsvMethod
getixsv(code)

Get the labels of the input tensors for EinCode, NestedEinsum and some other einsum-like objects. Returns a vector of vectors.

julia> getixsv(ein"(ij,jk),k->i")
3-element Vector{Vector{Char}}:
 ['i', 'j']
 ['j', 'k']
 ['k']
source
OMEinsum.getiyvMethod
getiyv(code)

Get the labels of the output tensor for EinCode, NestedEinsum and some other einsum-like objects. Returns a vector.

julia> getiyv(ein"(ij,jk),k->i")
1-element Vector{Char}:
 'i': ASCII/Unicode U+0069 (category Ll: Letter, lowercase)
source
OMEinsum.indices_and_locsMethod
indices_and_locs(ixs,iy)

given the index-labels of input and output of an einsum, return (in the same order):

  • a tuple of the distinct index-labels of the output iy
  • a tuple of the distinct index-labels in ixs of the input not appearing in the output iy
  • a tuple of tuples of locations of an index-label in the ixs in a list of all index-labels
  • a tuple of locations of index-labels in iy in a list of all index-labels

where the list of all index-labels is simply the first and the second output concatenated.

source
OMEinsum.loop_einsum!Method
loop_einsum!(ixs, iy, xs, y, sx, sy, size_dict)

In-place version of loop_einsum, saving the result in a preallocated tensor y of the correct size.

source
OMEinsum.loop_einsumMethod
loop_einsum(::EinCode, xs, size_dict)

Evaluate the einsum specified by the EinCode and the tensors xs by looping over all possible index assignments and accumulating the contributions to the result. Scales exponentially in the number of distinct index-labels.
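
example

A hedged sketch (loop_einsum is not exported, so it is imported explicitly; the size dictionary is built with get_size_dict as elsewhere in this reference):

julia> using OMEinsum: loop_einsum, get_size_dict

julia> a, b = rand(2,2), rand(2,2);

julia> code = EinCode((('i','j'),('j','k')),('i','k'));

julia> loop_einsum(code, (a, b), get_size_dict((('i','j'),('j','k')), (a, b))) ≈ a * b
true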

source
OMEinsum.map_prodMethod
map_prod(xs, ind, indexers)

calculate the value of an EinArray with EinIndexers indexers at location ind.

source
OMEinsum.match_ruleMethod
match_rule(ixs, iy)
match_rule(code::EinCode)

Returns the rule that matches the given specification; if no specialised rule applies, DefaultRule is returned, which dispatches to the slow loop_einsum backend.

source
OMEinsum.nopermuteMethod
nopermute(ix,iy)

Check that all values in iy that are also in ix appear in the same relative order.

example

julia> using OMEinsum: nopermute

julia> nopermute((1,2,3),(1,2))
true

julia> nopermute((1,2,3),(2,1))
false

source
OMEinsum.parse_parensMethod
parse_parens(s::AbstractString, i, narg)

Parse one level of parentheses starting at index i, where narg counts which tensor the current group of indices (e.g. "ijk") belongs to. Recursively calls itself for each new opening parenthesis.

source
OMEinsum.@ein!Macro
@ein! A[i,k] := B[i,j] * C[j,k]     # A = B * C
@ein! A[i,k] += B[i,j] * C[j,k]     # A += B * C

Macro interface similar to that of other packages.

In-place version of @ein.

example

julia> a, b, c, d = rand(2,2), rand(2,2), rand(2,2), zeros(2,2);

julia> cc = copy(c);

julia> @ein! d[i,k] := a[i,j] * b[j,k];

julia> d ≈ a * b
true

julia> d ≈ ein"ij,jk -> ik"(a,b)
true

julia> @ein! c[i,k] += a[i,j] * b[j,k];

julia> c ≈ cc + a * b
true
source
OMEinsum.@einMacro
@ein A[i,k] := B[i,j] * C[j,k]     # A = B * C

Macro interface similar to that of other packages.

You may use numbers in place of letters for dummy indices, as in @tensor, and need not name the output array. Thus A = @ein [1,2] := B[1,ξ] * C[ξ,2] is equivalent to the above. This can also be written A = ein"ij,jk -> ik"(B,C) using the numpy-style string macro.

example

julia> a, b = rand(2,2), rand(2,2);

julia> @ein c[i,k] := a[i,j] * b[j,k];

julia> c ≈ a * b
true

julia> c ≈ ein"ij,jk -> ik"(a,b)
true
source
OMEinsum.@ein_strMacro
ein"ij,jk -> ik"(A,B)

String macro interface which understands numpy.einsum's notation. Translates strings into StaticEinCode structs that can be called to evaluate an einsum. To control the evaluation order, use parentheses: instead of an EinCode, a NestedEinsum is then returned, which evaluates the expression according to the parentheses. The valid character ranges for index-labels are a-z and α-ω.

example

julia> a, b, c = rand(10,10), rand(10,10), rand(10,1);

julia> ein"ij,jk,kl -> il"(a,b,c) ≈ ein"(ij,jk),kl -> il"(a,b,c) ≈ a * b * c
true
source
OMEinsumContractionOrders.GreedyMethodType
GreedyMethod{MT}
GreedyMethod(; method=MinSpaceOut(), nrepeat=10)

The fast but poor greedy optimizer. Input arguments are

  • method is MinSpaceOut() or MinSpaceDiff().
    • MinSpaceOut chooses the contraction that produces the smallest output tensor,
    • MinSpaceDiff chooses the contraction that decreases the total space the most.
  • nrepeat is the number of repetitions; the best contraction order found over the repetitions is returned.
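
example

A usage sketch (it only assumes optimize_code, uniformsize and GreedyMethod are available as documented below; the optimized code stays callable):

julia> code = ein"ij,jk,kl,lm -> im";

julia> a, b, c, d = rand(2,2), rand(2,2), rand(2,2), rand(2,2);

julia> optcode = optimize_code(code, uniformsize(code, 2), GreedyMethod());

julia> optcode(a, b, c, d) ≈ a * b * c * d
true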
source
OMEinsumContractionOrders.KaHyParBipartiteType
KaHyParBipartite{RT,IT,GM}
KaHyParBipartite(; sc_target, imbalances=collect(0.0:0.005:0.8),
    max_group_size=40, greedy_config=GreedyMethod())

Optimize the einsum code contraction order using the KaHyPar + Greedy approach. This program first recursively cuts the tensors into several groups using KaHyPar, with the maximum group size specified by max_group_size and the maximum space complexity specified by sc_target, then finds the contraction order inside each group with the greedy search algorithm. Other arguments are

  • sc_target is the target space complexity, defined as log2(number of elements in the largest tensor),
  • imbalances is a KaHyPar parameter that controls the group sizes in hierarchical bipartition,
  • max_group_size is the maximum group size for which the greedy search is used,
  • greedy_config is a greedy optimizer.

source
OMEinsumContractionOrders.MergeGreedyType
MergeGreedy <: CodeSimplifier
MergeGreedy(; threshhold=-1e-12)

Contraction code simplifier (used to reduce the time spent in the optimizers) that greedily merges tensors if the space complexity of the merged tensors is reduced (the difference is smaller than the threshhold).
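
example

A sketch of passing the simplifier to optimize_code, following the positional-argument order documented for optimize_code below (the simplified code is assumed to remain callable):

julia> code = ein"ij,jk,kl,lm -> im";

julia> a, b, c, d = rand(2,2), rand(2,2), rand(2,2), rand(2,2);

julia> optcode = optimize_code(code, uniformsize(code, 2), GreedyMethod(), MergeGreedy());

julia> optcode(a, b, c, d) ≈ a * b * c * d
true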

source
OMEinsumContractionOrders.SABipartiteType
SABipartite{RT,BT}
SABipartite(; sc_target=25, ntrials=50, βs=0.1:0.2:15.0, niters=1000,
    max_group_size=40, greedy_config=GreedyMethod(), initializer=:random)

Optimize the einsum code contraction order using the Simulated Annealing bipartition + Greedy approach. This program first recursively cuts the tensors into several groups using simulated annealing, with the maximum group size specified by max_group_size and the maximum space complexity specified by sc_target, then finds the contraction order inside each group with the greedy search algorithm. Other arguments are

  • size_dict, a dictionary that specifies the leg dimensions,
  • sc_target is the target space complexity, defined as log2(number of elements in the largest tensor),
  • max_group_size is the maximum group size for which the greedy search is used,
  • βs is a list of inverse temperatures 1/T,
  • niters is the number of iterations at each temperature,
  • ntrials is the number of repetitions (with different random seeds),
  • greedy_config configures the greedy method,
  • initializer, the partition configuration initializer, one can choose :random or :greedy (slow but better).

source
OMEinsumContractionOrders.TreeSAType
TreeSA{RT,IT,GM}
TreeSA(; sc_target=20, βs=collect(0.01:0.05:15), ntrials=10, niters=50,
    sc_weight=1.0, rw_weight=0.2, initializer=:greedy, greedy_config=GreedyMethod(; nrepeat=1))

Optimize the einsum contraction pattern using the simulated annealing on tensor expression tree.

  • sc_target is the target space complexity,
  • ntrials, βs and niters are annealing parameters: ntrials independent annealing runs are performed, each sweeping the inverse temperatures specified by βs, with niters updates of the tree at each temperature.
  • sc_weight is the relative importance factor of space complexity in the loss compared with the time complexity.
  • rw_weight is the relative importance factor of memory read and write in the loss compared with the time complexity.
  • initializer specifies how to determine the initial configuration; it can be :greedy or :random. If the :greedy method is used to generate the initial configuration, the two extra arguments greedy_method and greedy_nrepeat are also used.
  • nslices is the number of sliced legs, default is 0.
  • fixed_slices is a vector of sliced legs, default is [].

source
OMEinsumContractionOrders.contraction_complexityMethod
contraction_complexity(eincode, size_dict) -> ContractionComplexity

Returns the time, space and read-write complexity of the einsum contraction. The returned object contains 3 fields:

  • time complexity tc defined as log2(number of element-wise multiplications).
  • space complexity sc defined as log2(size of the maximum intermediate tensor).
  • read-write complexity rwc defined as log2(the number of read-write operations).
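
example

A usage sketch (the exact numbers depend on the contraction order and sizes, so the output is suppressed; tc, sc and rwc are the fields listed above):

julia> code = ein"ij,jk,kl,il->";

julia> cc = contraction_complexity(code, uniformsize(code, 2));

julia> cc.tc, cc.sc, cc.rwc;  # log2 time, space and read-write complexities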
source
OMEinsumContractionOrders.flopMethod
flop(eincode, size_dict) -> Int

Returns the number of iterations, which differs from the true number of floating point operations (FLOPs) by a factor of 2.

source
OMEinsumContractionOrders.label_elimination_orderMethod
label_elimination_order(code) -> Vector

Returns a vector of labels sorted by the order they are eliminated in the contraction tree. The contraction tree is specified by code, which e.g. can be a NestedEinsum instance.
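
example

A sketch (assuming label_elimination_order is reachable as documented here; the contraction tree is produced with optimize_code, see below):

julia> code = ein"ij,jk,kl,il->";

julia> optcode = optimize_code(code, uniformsize(code, 2), GreedyMethod());

julia> Set(label_elimination_order(optcode)) == Set(['i', 'j', 'k', 'l'])
true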

source
OMEinsumContractionOrders.optimize_codeFunction
optimize_code(eincode, size_dict, optimizer = GreedyMethod(), simplifier=nothing, permute=true) -> optimized_eincode

Optimize the einsum contraction code and reduce the time/space complexity of tensor network contraction. Returns a NestedEinsum instance. Input arguments are

  • eincode is an einsum contraction code instance, one of DynamicEinCode, StaticEinCode or NestedEinsum.
  • size_dict is a dictionary of "edge label => edge size" that contains the size information; one can use uniformsize(eincode, 2) to create a uniform size.
  • optimizer is a CodeOptimizer instance, should be one of GreedyMethod, KaHyParBipartite, SABipartite or TreeSA. Check their docstrings for details.
  • simplifier is one of MergeVectors or MergeGreedy.
  • permute, if true, the output permutation is optimized as well.

Examples

julia> using OMEinsum

julia> code = ein"ij, jk, kl, il->"
ij, jk, kl, il -> 

julia> optimize_code(code, uniformsize(code, 2), TreeSA())
SlicedEinsum{Char, NestedEinsum{DynamicEinCode{Char}}}(Char[], ki, ki -> 
├─ jk, ij -> ki
│  ├─ jk
│  └─ ij
└─ kl, il -> ki
   ├─ kl
   └─ il
)
source
OMEinsumContractionOrders.optimize_greedyMethod
optimize_greedy(eincode, size_dict; method=MinSpaceOut(), nrepeat=10)

Greedily optimize the contraction order and return a NestedEinsum object. Methods are

  • MinSpaceOut, always chooses the next contraction that produces the smallest output tensor,
  • MinSpaceDiff, always chooses the next contraction that reduces the total space the most.
source
OMEinsumContractionOrders.optimize_kahyparMethod
optimize_kahypar(code, size_dict; sc_target, max_group_size=40, imbalances=0.0:0.01:0.2, greedy_method=MinSpaceOut(), greedy_nrepeat=10)

Optimize the einsum code contraction order using the KaHyPar + Greedy approach. size_dict is a dictionary that specifies leg dimensions. Check the docstring of KaHyParBipartite for a detailed explanation of the other input arguments.

source
OMEinsumContractionOrders.optimize_kahypar_autoMethod
optimize_kahypar_auto(code, size_dict; max_group_size=40, greedy_method=MinSpaceOut(), greedy_nrepeat=10)

Find the optimal contraction order automatically by determining the sc_target with bisection. It can fail if the tree width of your graph is larger than 100.

source
OMEinsumContractionOrders.optimize_saMethod
optimize_sa(code, size_dict; sc_target, max_group_size=40, βs=0.1:0.2:15.0, niters=1000, ntrials=50,
        greedy_method=MinSpaceOut(), greedy_nrepeat=10, initializer=:random)

Optimize the einsum code contraction order using the Simulated Annealing bipartition + Greedy approach. size_dict is a dictionary that specifies leg dimensions. Check the docstring of SABipartite for a detailed explanation of the other input arguments.

source
OMEinsumContractionOrders.optimize_treeMethod
optimize_tree(code, size_dict; sc_target=20, βs=0.1:0.1:10, ntrials=2, niters=100, sc_weight=1.0, rw_weight=0.2, initializer=:greedy, greedy_method=MinSpaceOut(), greedy_nrepeat=1, fixed_slices=[])

Optimize the einsum contraction pattern specified by code, with edge sizes specified by size_dict. Check the docstring of TreeSA for a detailed explanation of the other input arguments.

source
OMEinsumContractionOrders.tree_greedyMethod
tree_greedy(incidence_list, log2_sizes; method=MinSpaceOut())

Compute the greedy contraction order together with its time and space complexities. The rows of incidence_list are vertices and the columns are edges; log2_sizes are defined on the edges.

julia> code = ein"(abc,cde),(ce,sf,j),ak->ael"
aec, ec, ak -> ael
├─ ce, sf, j -> ec
│  ├─ sf
│  ├─ j
│  └─ ce
├─ ak
└─ abc, cde -> aec
   ├─ cde
   └─ abc

julia> optimize_greedy(code, Dict([c=>2 for c in "abcdefjkls"]))
ae, ak -> ea
├─ ak
└─ aec, ec -> ae
   ├─ ce,  -> ce
   │  ├─ sf, j -> 
   │  │  ├─ j
   │  │  └─ sf
   │  └─ ce
   └─ abc, cde -> aec
      ├─ cde
      └─ abc
source