OMEinsum.DynamicEinCodeType
DynamicEinCode{LT}
DynamicEinCode(ixs, iy)

Wrapper for an eincode specification that creates a callable object to evaluate the eincode ixs -> iy, where ixs are the index-labels of the input tensors and iy are the index-labels of the output.

example

julia> a, b = rand(2,2), rand(2,2);

julia> OMEinsum.DynamicEinCode((('i','j'),('j','k')),('i','k'))(a, b) ≈ a * b
true
source
OMEinsum.DynamicNestedEinsumType
DynamicNestedEinsum{LT} <: NestedEinsum{LT}
DynamicNestedEinsum(args, eins)
DynamicNestedEinsum{LT}(tensorindex::Int)

Einsum with contraction order, where the type parameter LT is the label type. It has two constructors. One takes a tensorindex (an Int) as input, which represents a leaf node in the contraction tree. The other takes an iterable of DynamicNestedEinsum objects, args, as the child nodes, and eins to specify the contraction operation at this node.

source
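
The two constructors can be combined to build a contraction tree by hand. A minimal sketch (assuming the constructor forms shown above; like the trees produced by parenthesized ein strings, the result is callable):

using OMEinsum

# the leaves refer to the first and second input tensor
leaf1 = OMEinsum.DynamicNestedEinsum{Char}(1)
leaf2 = OMEinsum.DynamicNestedEinsum{Char}(2)

# contract the two leaves with the rule ij,jk -> ik
eins = OMEinsum.DynamicEinCode((('i','j'), ('j','k')), ('i','k'))
tree = OMEinsum.DynamicNestedEinsum([leaf1, leaf2], eins)

tree(rand(2,2), rand(2,2))  # evaluates like any other einsum code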
OMEinsum.EinArrayType
EinArray{T, N, TT, LX, LY, ICT, OCT} <: AbstractArray{T, N}

A struct to hold the intermediate result of an einsum where all index-labels of both input and output are expanded to a rank-N-array whose values are lazily calculated. Indices are arranged as inner indices (or reduced dimensions) first and then outer indices.

Type parameters are

* `T`: element type,
* `N`: array dimension,
* `TT`: type of "tuple of input arrays",
* `LX`: type of "tuple of input indexers",
* `LY`: type of the output indexer,
* `ICT`: type of the inner CartesianIndices,
* `OCT`: type of the outer CartesianIndices.
source
OMEinsum.EinCodeType
EinCode <: AbstractEinsum
EinCode(ixs, iy)

Abstract type for sum-product contraction code. The constructor returns a DynamicEinCode instance.

source
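
For instance, mirroring the DynamicEinCode example above:

using OMEinsum

code = EinCode((('i','j'),('j','k')), ('i','k'))
code isa OMEinsum.DynamicEinCode  # true: the constructor returns the dynamic form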
OMEinsum.EinIndexerType
EinIndexer{locs,N}

A structure for indexing EinArrays. locs is a tuple of the index positions (among all indices). In the constructor, size is the size of the target tensor.

source
OMEinsum.EinIndexerMethod
EinIndexer{locs}(size::Tuple)

Constructor for EinIndexer for an object of size size where locs are the locations of relevant indices in a larger tuple.

source
OMEinsum.IndexGroupType
IndexGroup

Leaf in a contraction tree; contains the indices and the number of the tensor it describes, e.g. in "ij,jk -> ik", the indices "ij" belong to tensor 1, so it would be described by IndexGroup(['i','j'], 1).

source
OMEinsum.NestedEinsumConstructorType
NestedEinsumConstructor

Describes a (potentially) nested einsum. Important fields:

  • args, a vector of all inputs, either IndexGroup objects (corresponding to tensors) or NestedEinsumConstructor objects
  • iy, indices of output
source
OMEinsum.SlicedEinsumType
SlicedEinsum{LT, Ein} <: AbstractEinsum

A tensor network with slicing. LT is the label type and Ein is the tensor network.

Fields

  • slicing::Vector{LT}: A vector of labels to slice.
  • eins::Ein: The tensor network.
source
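
A minimal construction sketch (assuming the default field-order constructor SlicedEinsum(slicing, eins)):

using OMEinsum

code = ein"ij,jk,kl,il->"
nested = optimize_code(code, uniformsize(code, 2), GreedyMethod())
# wrap the optimized network and slice over the label 'i'
sliced = OMEinsum.SlicedEinsum(['i'], nested)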
OMEinsum.StaticEinCodeType
StaticEinCode{LT, ixs, iy}

The static version of DynamicEinCode that matches the contraction rule at compile time. It is the default return type of @ein_str macro. LT is the label type.

source
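
For example:

using OMEinsum

ein"ij,jk->ik" isa OMEinsum.StaticEinCode  # true: @ein_str returns the static form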
OMEinsum.StaticNestedEinsumType
StaticNestedEinsum{LT,args,eins} <: NestedEinsum{LT}
StaticNestedEinsum(args, eins)
StaticNestedEinsum{LT}(tensorindex::Int)

Einsum with contraction order, where the type parameter LT is the label type, args is a tuple of StaticNestedEinsum, eins is a StaticEinCode and a leaf node is defined by setting eins to an integer. It has two constructors. One takes a tensorindex as input, which represents a leaf node in the contraction tree. The other takes an iterable of StaticNestedEinsum objects, args, as the child nodes, and eins to specify the contraction operation at this node.

source
Base.getindexMethod
getindex(A::EinArray, inds...)

return the lazily calculated entry of A at index inds.

source
OMEinsum.allow_loopsMethod
allow_loops(flag::Bool)

Setting this to false will cause OMEinsum to log an error if it falls back to loop_einsum evaluation, instead of calling specialised kernels. The default is true.

source
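
Typical usage, e.g. in a test suite that should not silently hit the slow fallback:

using OMEinsum

OMEinsum.allow_loops(false)  # log an error whenever loop_einsum is used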
OMEinsum.alluniqueMethod
allunique(ix::Tuple)

return true if all elements of ix appear only once in ix.

example

julia> using OMEinsum: allunique

julia> allunique((1,2,3,4))
true

julia> allunique((1,2,3,1))
false
source
OMEinsum.asarrayMethod
asarray(x[, parent::AbstractArray]) -> AbstractArray

Return a 0-dimensional array containing the item x; if x is already an array, return it unchanged. If a parent is supplied, try to match the parent array type.

source
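
For example:

using OMEinsum: asarray

asarray(1.0)         # a 0-dimensional array wrapping 1.0
asarray(rand(2, 2))  # already an array: returned unchanged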
OMEinsum.back_propagateMethod
back_propagate(f, code, cache, ȳ, size_dict)

Back propagate the message through the cached tree cache and return a tree storing the intermediate messages. The messages can be, e.g., gradients.

Arguments

  • f: The back-propagation rule. The signature is f(eins, xs, y, size_dict, dy) -> dxs, where
    • eins: The contraction code at the current node.
    • xs: The input tensors at the current node.
    • y: The output tensor at the current node.
    • size_dict: The size dictionary, which maps the label to the size of the corresponding dimension.
    • dy: The message on the output tensor (y) to back-propagate through the current node.
    • dxs: The message on the input tensors (xs) as the result of back-propagation.
  • code: The contraction code, which can be a NestedEinsum or a SlicedEinsum.
  • cache: The cached intermediate results, which can be generated by cached_einsum.
  • ȳ: The message to back-propagate.
  • size_dict: The size dictionary, which maps the label to the size of the corresponding dimension.

Returns

  • CacheTree: The tree storing the intermediate messages.
source
OMEinsum.cached_einsumMethod
cached_einsum(code, xs, size_dict)

Compute the einsum contraction and cache the intermediate contraction results.

Arguments

  • code: The contraction code, which can be a NestedEinsum or a SlicedEinsum.
  • xs: The input tensors.
  • size_dict: The size dictionary, which maps the label to the size of the corresponding dimension.

Returns

  • CacheTree: The cached tree storing the intermediate results.
source
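
A minimal sketch (assuming get_size_dict accepts the label vectors returned by getixsv):

using OMEinsum

code = ein"(ij,jk),kl->il"                # a NestedEinsum
xs = (rand(2,2), rand(2,2), rand(2,2))
size_dict = OMEinsum.get_size_dict(getixsv(code), xs)
cache = OMEinsum.cached_einsum(code, xs, size_dict)  # tree of intermediate results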
OMEinsum.cost_and_gradientFunction
cost_and_gradient(code, xs, ȳ)

Compute the cost and the gradients w.r.t the input tensors xs.

Arguments

  • code: The contraction code, which can be a NestedEinsum or a SlicedEinsum.
  • xs: The input tensors.
  • ȳ: The message to back-propagate. Default is 1.

Returns

  • cost: The cost of the contraction.
  • grads: The gradients w.r.t the input tensors.
source
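
A minimal sketch of the workflow (cached_einsum and back_propagate are used internally):

using OMEinsum

a, b, c = rand(2,2), rand(2,2), rand(2,2)
code = ein"(ij,jk),kl->il"                # a NestedEinsum, as required above
y, grads = OMEinsum.cost_and_gradient(code, (a, b, c))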
OMEinsum.einarrayMethod
einarray(::Val{ixs}, ::Val{iy}, xs, size_dict) -> EinArray

Constructor of EinArray from an EinCode, a tuple of tensors xs and a size_dict that assigns each index-label a size. The returned EinArray holds an intermediate result of the einsum specified by the EinCode, with indices corresponding to all unique labels in the einsum. Reduction over the (lazily calculated) dimensions that correspond to labels not present in the output leads to the result of the einsum.

example

julia> using OMEinsum: get_size_dict

julia> a, b = rand(2,2), rand(2,2);

julia> sd = get_size_dict((('i','j'),('j','k')), (a, b));

julia> ea = OMEinsum.einarray(Val((('i','j'),('j','k'))),Val(('i','k')), (a,b), sd);

julia> dropdims(sum(ea, dims=1), dims=1) ≈ a * b
true
source
OMEinsum.einsumFunction
einsum(code::EinCode, xs, size_dict)

Return the tensor that results from contracting the tensors xs according to the contraction code code.

Arguments

  • code: The einsum notation, which can be an instance of EinCode, NestedEinsum, or SlicedEinsum.
  • xs: The input tensors.
  • size_dict: A dictionary that maps index-labels to their sizes.

Examples

julia> a, b = rand(2,2), rand(2,2);

julia> einsum(EinCode((('i','j'),('j','k')),('i','k')), (a, b)) ≈ a * b
true

julia> einsum(EinCode((('i','j'),('j','k')),('k','i')), (a, b)) ≈ permutedims(a * b, (2,1))
true
source
OMEinsum.einsum!Function
einsum!(code::EinCode, xs, y, sx, sy, size_dict)

Inplace version of einsum. The result is stored in y.

Arguments

  • code: The einsum notation, which can be an instance of EinCode, NestedEinsum, or SlicedEinsum.
  • xs: The input tensors.
  • y: The output tensor.
  • sx: The scale factor applied to the contraction result.
  • sy: The scale factor applied to the original content of y.
  • size_dict: A dictionary that maps index-labels to their sizes.
source
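
A hedged sketch, assuming sx and sy act like BLAS-style scale factors (y is overwritten with sx * contraction + sy * y):

using OMEinsum

a, b = rand(2,2), rand(2,2)
y = zeros(2,2)
code = EinCode((('i','j'),('j','k')), ('i','k'))
size_dict = OMEinsum.get_size_dict((('i','j'),('j','k')), (a, b))
OMEinsum.einsum!(code, (a, b), y, true, false, size_dict)
y ≈ a * b  # true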
OMEinsum.einsum_gradMethod
einsum_grad(ixs, xs, iy, size_dict, cdy, i)

return the gradient of the result of evaluating the EinCode w.r.t the ith tensor in xs. cdy is the result of applying the EinCode to the xs.

example

julia> using OMEinsum: einsum_grad, get_size_dict

julia> a, b = rand(2,2), rand(2,2);

julia> c = einsum(EinCode((('i','j'),('j','k')), ('i','k')), (a,b));

julia> sd = get_size_dict((('i','j'),('j','k')), (a,b));

julia> einsum_grad((('i','j'),('j','k')), (a,b), ('i','k'), sd, c, 1) ≈ c * transpose(b)
true
source
OMEinsum.filliys!Method
filliys!(neinsum::NestedEinsumConstructor)

goes through all NestedEinsumConstructor objects in the tree and saves the correct iy in them.

source
OMEinsum.get_size_dict!Method
get_size_dict!(ixs, xs, size_info)

return a dictionary that is used to get the size of an index-label in the einsum-specification with input-indices ixs and tensors xs after consistency within ixs and between ixs and xs has been verified.

source
OMEinsum.getixsvMethod
getixsv(code)

Get labels of input tensors for EinCode, NestedEinsum and some other einsum like objects. Returns a vector of vectors.

julia> getixsv(ein"(ij,jk),k->i")
3-element Vector{Vector{Char}}:
 ['i', 'j']
 ['j', 'k']
 ['k']
source
OMEinsum.getiyvMethod
getiyv(code)

Get labels of the output tensor for EinCode, NestedEinsum and some other einsum like objects. Returns a vector.

julia> getiyv(ein"(ij,jk),k->i")
1-element Vector{Char}:
 'i': ASCII/Unicode U+0069 (category Ll: Letter, lowercase)
source
OMEinsum.indices_and_locsMethod
indices_and_locs(ixs,iy)

given the index-labels of input and output of an einsum, return (in the same order):

  • a tuple of the distinct index-labels of the output iy
  • a tuple of the distinct index-labels in ixs of the input not appearing in the output iy
  • a tuple of tuples of locations of an index-label in the ixs in a list of all index-labels
  • a tuple of locations of index-labels in iy in a list of all index-labels

where the list of all index-labels is simply the first and the second output concatenated.

source
OMEinsum.loop_einsum!Method
loop_einsum!(ixs, iy, xs, y, sx, sy, size_dict)

In-place version of loop_einsum, saving the result in a preallocated tensor y of the correct size.

source
OMEinsum.loop_einsumMethod
loop_einsum(::EinCode, xs, size_dict)

Evaluates the eincode specified by EinCode and the tensors xs by looping over all possible indices and calculating the contributions to the result. Scales exponentially in the number of distinct index-labels.

source
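
For example:

using OMEinsum

a, b = rand(2,2), rand(2,2)
code = EinCode((('i','j'),('j','k')), ('i','k'))
size_dict = OMEinsum.get_size_dict((('i','j'),('j','k')), (a, b))
OMEinsum.loop_einsum(code, (a, b), size_dict) ≈ a * b  # true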
OMEinsum.map_prodMethod
map_prod(xs, ind, indexers)

calculate the value of an EinArray with EinIndexers indexers at location ind.

source
OMEinsum.match_ruleMethod
match_rule(ixs, iy)
match_rule(code::EinCode)

Returns the rule that matches; otherwise returns DefaultRule, the slow loop_einsum fallback.

source
OMEinsum.nopermuteMethod
nopermute(ix,iy)

Check that all values in iy that are also in ix have the same relative order.

example

julia> using OMEinsum: nopermute

julia> nopermute((1,2,3),(1,2))
true

julia> nopermute((1,2,3),(2,1))
false

source
OMEinsum.parse_parensMethod
parse_parens(s::AbstractString, i, narg)

Parse one level of parens starting at index i, where narg counts which tensor the current group of indices (e.g. "ijk") belongs to. Recursively calls itself for each new paren that is opened.

source
OMEinsum.@ein!Macro
@ein! A[i,k] := B[i,j] * C[j,k]     # A = B * C
@ein! A[i,k] += B[i,j] * C[j,k]     # A += B * C

Macro interface similar to that of other packages.

Inplace version of @ein.

example

julia> a, b, c, d = rand(2,2), rand(2,2), rand(2,2), zeros(2,2);

julia> cc = copy(c);

julia> @ein! d[i,k] := a[i,j] * b[j,k];

julia> d ≈ a * b
true

julia> d ≈ ein"ij,jk -> ik"(a,b)
true

julia> @ein! c[i,k] += a[i,j] * b[j,k];

julia> c ≈ cc + a * b
true
source
OMEinsum.@einMacro
@ein A[i,k] := B[i,j] * C[j,k]     # A = B * C

Macro interface similar to that of other packages.

You may use numbers in place of letters for dummy indices, as in @tensor, and need not name the output array. Thus A = @ein [1,2] := B[1,ξ] * C[ξ,2] is equivalent to the above. This can also be written A = ein"ij,jk -> ik"(B,C) using the numpy-style string macro.

example

julia> a, b = rand(2,2), rand(2,2);

julia> @ein c[i,k] := a[i,j] * b[j,k];

julia> c ≈ a * b
true

julia> c ≈ ein"ij,jk -> ik"(a,b)
true
source
OMEinsum.@ein_strMacro
ein"ij,jk -> ik"(A,B)

String macro interface which understands numpy.einsum's notation. Translates strings into StaticEinCode structs that can be called to evaluate an einsum. To control the evaluation order, use parentheses: a NestedEinsum is then returned instead of an EinCode, and it evaluates the expression according to the parentheses. The valid character ranges for index-labels are a-z and α-ω.

example

julia> a, b, c = rand(10,10), rand(10,10), rand(10,1);

julia> ein"ij,jk,kl -> il"(a,b,c) ≈ ein"(ij,jk),kl -> il"(a,b,c) ≈ a * b * c
true
source
OMEinsum.@optein_strMacro
optein"ij,jk,kl -> ik"(A, B, C)

String macro interface similar to @ein_str, but with an optimized contraction order (dimensions are assumed to be uniform).

source
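
For example, the optimized code gives the same result as the plain string macro:

using OMEinsum

a, b, c = rand(4,4), rand(4,4), rand(4,4)
optein"ij,jk,kl->il"(a, b, c) ≈ ein"ij,jk,kl->il"(a, b, c)  # true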
OMEinsumContractionOrders.ExactTreewidthType
struct ExactTreewidth{GM} <: CodeOptimizer
ExactTreewidth(greedy_config::GM = GreedyMethod(nrepeat=1))

An optimizer using the exact treewidth solver provided in TreeWidthSolver.jl. The greedy_config is the configuration for the greedy method, which is used to solve the subproblems in the tree decomposition.

Fields

  • greedy_config::GM: The configuration for the greedy method.
source
OMEinsumContractionOrders.GreedyMethodType
GreedyMethod{MT}
GreedyMethod(; α = 0.0, temperature = 0.0, nrepeat=10)

The fast but poor greedy optimizer. Input arguments are

* `α` is the parameter for the loss function; for a pairwise interaction, L = size(out) - α * (size(in1) + size(in2)).
* `temperature` is the parameter for sampling: if it is zero, the minimum loss is selected; for non-zero temperature, the loss is sampled from the Boltzmann distribution, given by p ~ exp(-loss/temperature).
* `nrepeat` is the number of repetitions; the best contraction order among the repeats is returned.
source
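
Typical usage (see also optimize_code below):

using OMEinsum

code = ein"ij,jk,kl,il->"
optimize_code(code, uniformsize(code, 2), GreedyMethod(nrepeat = 5))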
OMEinsumContractionOrders.KaHyParBipartiteType
KaHyParBipartite{RT,IT,GM}
KaHyParBipartite(; sc_target, imbalances=collect(0.0:0.005:0.8),
    max_group_size=40, greedy_config=GreedyMethod())

Optimize the einsum code contraction order using the KaHyPar + Greedy approach. This program first recursively cuts the tensors into several groups using KaHyPar, with the maximum group size specified by max_group_size and the maximum space complexity specified by sc_target, then finds the contraction order inside each group with the greedy search algorithm. Other arguments are

  • sc_target is the target space complexity, defined as log2(number of elements in the largest tensor),
  • imbalances is a KaHyPar parameter that controls the group sizes in hierarchical bipartition,
  • max_group_size is the maximum group size for which the greedy search is used,
  • greedy_config is a greedy optimizer.

source
OMEinsumContractionOrders.MergeGreedyType
MergeGreedy <: CodeSimplifier
MergeGreedy(; threshhold=-1e-12)

Contraction code simplifier (used to reduce the time spent in calling the optimizers) that merges tensors greedily if the space complexity of the merged tensor is reduced (with the difference smaller than threshhold).

source
OMEinsumContractionOrders.SABipartiteType
SABipartite{RT,BT}
SABipartite(; sc_target=25, ntrials=50, βs=0.1:0.2:15.0, niters=1000,
    max_group_size=40, greedy_config=GreedyMethod(), initializer=:random)

Optimize the einsum code contraction order using the Simulated Annealing bipartition + Greedy approach. This program first recursively cuts the tensors into several groups using simulated annealing, with the maximum group size specified by max_group_size and the maximum space complexity specified by sc_target, then finds the contraction order inside each group with the greedy search algorithm. Other arguments are

  • size_dict, a dictionary that specifies leg dimensions,
  • sc_target is the target space complexity, defined as log2(number of elements in the largest tensor),
  • max_group_size is the maximum group size for which the greedy search is used,
  • βs is a list of inverse temperatures 1/T,
  • niters is the number of iterations at each temperature,
  • ntrials is the number of repetitions (with different random seeds),
  • sub_optimizer, the optimizer for the bipartited sub-graphs; one can choose GreedyMethod() or TreeSA(),
  • initializer, the partition configuration initializer; one can choose :random or :greedy (slow but better).

source
OMEinsumContractionOrders.TreeSAType
TreeSA{RT,IT,GM}
TreeSA(; sc_target=20, βs=collect(0.01:0.05:15), ntrials=10, niters=50,
    sc_weight=1.0, rw_weight=0.2, initializer=:greedy, greedy_config=GreedyMethod(; nrepeat=1))

Optimize the einsum contraction pattern using the simulated annealing on tensor expression tree.

  • sc_target is the target space complexity,
  • ntrials, βs and niters are annealing parameters: the algorithm runs ntrials independent annealings, each with inverse temperatures specified by βs, and at each temperature performs niters updates of the tree.
  • sc_weight is the relative importance factor of space complexity in the loss compared with the time complexity.
  • rw_weight is the relative importance factor of memory read and write in the loss compared with the time complexity.
  • initializer specifies how to determine the initial configuration; it can be :greedy or :random. If :greedy is used, the initial configuration is generated by the greedy method, which takes two extra arguments, greedy_method and greedy_nrepeat.
  • nslices is the number of sliced legs, default is 0.
  • fixed_slices is a vector of sliced legs, default is [].

source
OMEinsumContractionOrders.contraction_complexityMethod
contraction_complexity(eincode, size_dict) -> ContractionComplexity

Returns the time, space and read-write complexity of the einsum contraction. The returned object contains 3 fields:

  • time complexity tc defined as log2(number of element-wise multiplications).
  • space complexity sc defined as log2(size of the maximum intermediate tensor).
  • read-write complexity rwc defined as log2(the number of read-write operations).
source
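
For example, evaluating the complexity of an optimized contraction (the field names tc, sc and rwc are documented above):

using OMEinsum

code = ein"ij,jk,kl,il->"
optcode = optimize_code(code, uniformsize(code, 2), GreedyMethod())
cc = contraction_complexity(optcode, uniformsize(code, 2))
cc.tc, cc.sc, cc.rwc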
OMEinsumContractionOrders.exact_treewidth_methodMethod
exact_treewidth_method(incidence_list::IncidenceList{VT,ET}, log2_edge_sizes; α::TA = 0.0, temperature::TT = 0.0, nrepeat=1) where {VT,ET,TA,TT}

This function calculates the exact treewidth of a graph using TreeWidthSolver.jl. It takes an incidence list representation of the graph (incidence_list) and a dictionary of logarithm base 2 edge sizes (log2_edge_sizes) as input. It also accepts the optional parameters α, temperature and nrepeat (with default values 0.0, 0.0 and 1, respectively), which are parameters of the GreedyMethod used as a sub-optimizer in the contraction process.

Arguments

  • incidence_list: An incidence list representation of the graph.
  • log2_edge_sizes: A dictionary of logarithm base 2 edge sizes.

Returns

  • A ContractionTree representing the contraction process.

example

julia> optimizer = OMEinsumContractionOrders.ExactTreewidth()
OMEinsumContractionOrders.ExactTreewidth{GreedyMethod{Float64, Float64}}(GreedyMethod{Float64, Float64}(0.0, 0.0, 1))

julia> eincode = OMEinsumContractionOrders.EinCode([['a', 'b'], ['a', 'c', 'd'], ['b', 'c', 'e', 'f'], ['e'], ['d', 'f']], ['a'])
ab, acd, bcef, e, df -> a

julia> size_dict = Dict([c=>(1<<i) for (i,c) in enumerate(['a', 'b', 'c', 'd', 'e', 'f'])]...)
Dict{Char, Int64} with 6 entries:
  'f' => 64
  'a' => 2
  'c' => 8
  'd' => 16
  'e' => 32
  'b' => 4

julia> optcode = optimize_code(eincode, size_dict, optimizer)
ab, ab -> a
├─ ab
└─ fac, bcf -> ab
   ├─ df, acd -> fac
   │  ├─ df
   │  └─ acd
   └─ e, bcef -> bcf
      ├─ e
      └─ bcef
source
OMEinsumContractionOrders.flopMethod
flop(eincode, size_dict) -> Int

Returns the number of iterations, which differs from the true number of floating point operations (FLOP) by a factor of 2.

source
OMEinsumContractionOrders.label_elimination_orderMethod
label_elimination_order(code) -> Vector

Returns a vector of labels sorted by the order they are eliminated in the contraction tree. The contraction tree is specified by code, which e.g. can be a NestedEinsum instance.

source
OMEinsumContractionOrders.optimize_codeFunction
optimize_code(eincode, size_dict, optimizer = GreedyMethod(), simplifier=nothing, permute=true) -> optimized_eincode

Optimize the einsum contraction code and reduce the time/space complexity of the tensor network contraction. Returns a NestedEinsum instance. Input arguments are

  • eincode is an einsum contraction code instance, one of DynamicEinCode, StaticEinCode or NestedEinsum.
  • size_dict is a dictionary of "edge label => edge size" that contains the size information; one can use uniformsize(eincode, 2) to create a uniform size.
  • optimizer is a CodeOptimizer instance, which should be one of GreedyMethod, ExactTreewidth, KaHyParBipartite, SABipartite or TreeSA. Check their docstrings for details.
  • simplifier is one of MergeVectors or MergeGreedy.
  • permute: optimize the permutation if permute is true.

Examples

julia> using OMEinsum

julia> code = ein"ij, jk, kl, il->"
ij, jk, kl, il ->

julia> optimize_code(code, uniformsize(code, 2), TreeSA())
SlicedEinsum{Char, NestedEinsum{DynamicEinCode{Char}}}(Char[], ki, ki -> 
├─ jk, ij -> ki
│  ├─ jk
│  └─ ij
└─ kl, il -> ki
   ├─ kl
   └─ il
)
source
OMEinsumContractionOrders.optimize_exact_treewidthMethod
optimize_exact_treewidth(optimizer, eincode, size_dict)

Optimize the contraction order by solving for the exact treewidth of the line graph corresponding to the eincode, and return a NestedEinsum object. Check the docstring of exact_treewidth_method for a detailed explanation of the other input arguments.

source
OMEinsumContractionOrders.optimize_greedyMethod
optimize_greedy(eincode, size_dict; α = 0.0, temperature = 0.0, nrepeat=10)

Greedily optimize the contraction order and return a NestedEinsum object. Check the docstring of tree_greedy for a detailed explanation of the other input arguments.

source
OMEinsumContractionOrders.optimize_kahyparMethod
optimize_kahypar(code, size_dict; sc_target, max_group_size=40, imbalances=0.0:0.01:0.2, greedy_method=MinSpaceOut(), greedy_nrepeat=10)

Optimize the einsum code contraction order using the KaHyPar + Greedy approach. size_dict is a dictionary that specifies leg dimensions. Check the docstring of KaHyParBipartite for a detailed explanation of the other input arguments.

source
OMEinsumContractionOrders.optimize_kahypar_autoMethod
optimize_kahypar_auto(code, size_dict; max_group_size=40, sub_optimizer = GreedyMethod())

Find the optimal contraction order automatically by determining sc_target with bisection. It can fail if the treewidth of your graph is larger than 100.

source
OMEinsumContractionOrders.optimize_saMethod
optimize_sa(code, size_dict; sc_target, max_group_size=40, βs=0.1:0.2:15.0, niters=1000, ntrials=50,
       sub_optimizer = GreedyMethod(), initializer=:random)

Optimize the einsum code contraction order using the Simulated Annealing bipartition + Greedy approach. size_dict is a dictionary that specifies leg dimensions. Check the docstring of SABipartite for a detailed explanation of the other input arguments.

source
OMEinsumContractionOrders.optimize_treeMethod
optimize_tree(code, size_dict; sc_target=20, βs=0.1:0.1:10, ntrials=2, niters=100, sc_weight=1.0, rw_weight=0.2, initializer=:greedy, greedy_method=MinSpaceOut(), fixed_slices=[])

Optimize the einsum contraction pattern specified by code, with edge sizes specified by size_dict. Check the docstring of TreeSA for a detailed explanation of the other input arguments.

source
OMEinsumContractionOrders.tree_greedyMethod
tree_greedy(incidence_list, log2_sizes; α = 0.0, temperature = 0.0, nrepeat=10)

Compute a greedy contraction order together with its time and space complexities. The rows of the incidence_list are vertices and the columns are edges; log2_sizes are defined on the edges. α is the parameter of the loss function: for a pairwise interaction, L = size(out) - α * (size(in1) + size(in2)). temperature is the parameter for sampling: if it is zero, the minimum loss is selected; for non-zero temperature, the loss is sampled from the Boltzmann distribution, given by p ~ exp(-loss/temperature).

example

julia> code = ein"(abc,cde),(ce,sf,j),ak->ael"
aec, ec, ak -> ael
├─ ce, sf, j -> ec
│  ├─ sf
│  ├─ j
│  └─ ce
├─ ak
└─ abc, cde -> aec
   ├─ cde
   └─ abc


julia> optimize_greedy(code, Dict([c=>2 for c in "abcdefjkls"]))
ae, ak -> ea
├─ ak
└─ aec, ec -> ae
   ├─ ce,  -> ce
   │  ├─ sf, j -> 
   │  │  ├─ j
   │  │  └─ sf
   │  └─ ce
   └─ abc, cde -> aec
      ├─ cde
      └─ abc
source