Misc. Utilities

NaiveGAflux.MutationShield - Type
MutationShield <: DecoratingTrait
MutationShield(t, allowed...)

Shields its associated vertex from being selected for mutation.

Any type listed in allowed is still permitted to mutate the vertex when it is supplied in the call to allow_mutation.

Note that the vertex might still be modified if an adjacent vertex is mutated in a way which propagates to the shielded vertex.

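A hedged sketch of typical usage. The traitfun keyword of fluxvertex follows NaiveNASflux conventions; the wiring shown here is an assumption, not taken from the docstring above.

```julia
using NaiveGAflux, Flux

# Hypothetical sketch: shield the output vertex so mutation operations never
# select it, keeping the number of output neurons fixed.
iv = denseinputvertex("in", 3)
h  = fluxvertex("hidden", Dense(3 => 4, relu), iv)
# traitfun = MutationShield wraps the vertex's base trait t as MutationShield(t).
v  = fluxvertex("out", Dense(4 => 2), h; traitfun = MutationShield)
```
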
NaiveGAflux.ApplyIf - Type
ApplyIf <: DecoratingTrait
ApplyIf(predicate::Function, apply::Function, base::MutationTrait)

Enables calling apply(v) for an AbstractVertex v which has this trait whenever predicate(v) == true.

The motivating use case is to have a way to remove vertices which have ended up as no-ops, e.g. elementwise or concatenation vertices with a single input, or vertices with an identity activation function.

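As a hedged sketch of the no-op use case: a predicate flagging a vertex with a single input, paired with remove! as the apply function. inputs, remove! and SizeInvariant come from NaiveNASlib; their combination here is an assumption.

```julia
using NaiveGAflux

# Hypothetical sketch: once mutation leaves a concatenation vertex with a
# single input it is a no-op, so apply removes it from the graph.
isnoop(v) = length(inputs(v)) == 1
trait = ApplyIf(isnoop, v -> remove!(v), SizeInvariant())
```
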
NaiveGAflux.PersistentArray - Type
PersistentArray{T, N} <: AbstractArray{T, N}
PersistentArray(savedir::String, nr::Integer, generator; suffix=".jls")
PersistentArray(savedir::String, suffix::String, data::Array)

Simple persistent array. Can be created from serialized data and can be asked to persist its elements using persist.

Note that once initialized, the array is not backed by the serialized data: adding or deleting files is not reflected in the array, and vice versa.

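A hedged sketch of the first constructor; the directory name is arbitrary.

```julia
using NaiveGAflux

# Each element is deserialized from savedir if a matching ".jls" file exists
# from an earlier run, otherwise it is produced by the generator.
arr = PersistentArray("checkpoints", 3, i -> i * 10)
arr[2]       # 20
persist(arr) # serialize all elements under "checkpoints"
```
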
NaiveGAflux.ShieldedOpt - Type
ShieldedOpt{R} <: Flux.Optimise.AbstractOptimiser 
ShieldedOpt(rule)

Shields rule from mutation by OptimiserMutation.

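A hedged sketch, assuming the legacy Flux.Optimise.Optimiser composite for combining rules.

```julia
using Flux, NaiveGAflux

# Hypothetical sketch: OptimiserMutation may change WeightDecay's strength,
# but the shielded Descent keeps its learning rate of 0.1.
opt = Flux.Optimise.Optimiser(ShieldedOpt(Descent(0.1)), WeightDecay(1f-4))
```
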
NaiveGAflux.AutoOptimiserExperimental.AutoOptimiser - Type
AutoOptimiser{L}
AutoOptimiser(gradfun, layer, opt)
AutoOptimiser(gradfun, opt)
AutoOptimiser(opt)

Updates parameters for layer whenever gradients are computed during the backwards pass.

This bending of the (r)rules serves the following purposes:

  1. It allows gradients to be garbage collected/freed eagerly, reducing memory pressure.
  2. It prevents explosive compile times when updating parameters for large and nested models.
  3. It allows optimiser state to be mutated and transformed (e.g. gpu/cpu) together with the model (not implemented yet).

Designed to be used as the layerfun argument to fluxvertex. In particular, AutoOptimiser(o) and AutoOptimiser(gradfun, o), where o is an Optimisers.AbstractRule, return a function which expects the layer, enabling constructs like layerfun = LazyMutable ∘ AutoOptimiser(Descent()).
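
The construct above in context, as a hedged sketch; the vertex construction around it is an assumption.

```julia
using NaiveGAflux, Flux

# AutoOptimiser(Descent()) returns a layer -> AutoOptimiser closure, so it
# composes with LazyMutable as the layerfun for fluxvertex.
iv = denseinputvertex("in", 3)
v  = fluxvertex("dense", Dense(3 => 4, relu), iv;
                layerfun = LazyMutable ∘ AutoOptimiser(Descent()))
```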

Parameters are updated through gradfun(optstate, layer, grad), which is expected to return a tuple of the new optimiser state and the gradient for layer (typically either grad or NoTangent()). Note that this differs from what Optimisers.update! returns, meaning that Optimisers.update! by itself cannot be given as gradfun (although gradfun typically calls Optimisers.update!).

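A hedged sketch of a gradfun honoring this contract; the name mygradfun is hypothetical.

```julia
using Optimisers, ChainRulesCore

# Hypothetical gradfun: Optimisers.update! returns (state, layer), so the
# updated layer is dropped and the tuple rebuilt into the expected
# (new state, gradient for layer) shape. Return ChainRulesCore.NoTangent()
# instead of grad to stop the layer gradient from propagating further.
function mygradfun(optstate, layer, grad)
    newstate, _ = Optimisers.update!(optstate, layer, grad)
    return newstate, grad
end
```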