3 changes: 3 additions & 0 deletions .JuliaFormatter.toml
@@ -0,0 +1,3 @@
style = "sciml"
format_markdown = true
format_docstrings = true
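
With this file at the repository root, JuliaFormatter picks the SciML style up automatically; a typical invocation might look like the sketch below (options are read from `.JuliaFormatter.toml`):

```julia
using JuliaFormatter

# Format every Julia file in the repository, using the settings
# from .JuliaFormatter.toml at the repo root.
format(".")
```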
97 changes: 44 additions & 53 deletions README.md
@@ -1,44 +1,34 @@
# ComponentArrays.jl
![](assets/logo.png)

| **Documentation** | **Build Status** |
|:-------------------------------------------------------------------------:|:-----------------------------------------------------------:|
| [![][docs-stable-img]][docs-stable-url] [![][docs-dev-img]][docs-dev-url] | [![][build-img]][build-url] [![][codecov-img]][codecov-url] |


[docs-dev-img]: https://img.shields.io/badge/docs-latest-blue.svg
[docs-dev-url]: https://docs.sciml.ai/ComponentArrays/dev/

[docs-stable-img]: https://img.shields.io/badge/docs-stable-blue.svg
[docs-stable-url]: https://docs.sciml.ai/ComponentArrays/stable/

[build-img]: https://img.shields.io/github/actions/workflow/status/SciML/ComponentArrays.jl/ci.yml
[build-url]: https://github.com/SciML/ComponentArrays.jl/actions/workflows/ci.yml
![](assets/logo.png)

[codecov-img]: https://img.shields.io/codecov/c/github/SciML/ComponentArrays.jl
[codecov-url]: https://codecov.io/gh/SciML/ComponentArrays.jl
| **Documentation** | **Build Status** |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|
| [![](https://img.shields.io/badge/docs-stable-blue.svg)](https://docs.sciml.ai/ComponentArrays/stable/) [![](https://img.shields.io/badge/docs-latest-blue.svg)](https://docs.sciml.ai/ComponentArrays/dev/) | [![](https://img.shields.io/github/actions/workflow/status/SciML/ComponentArrays.jl/ci.yml)](https://github.com/SciML/ComponentArrays.jl/actions/workflows/ci.yml) [![](https://img.shields.io/codecov/c/github/SciML/ComponentArrays.jl)](https://codecov.io/gh/SciML/ComponentArrays.jl) |

The main export of this package is the ````ComponentArray```` type. "Components" of ````ComponentArray````s
are really just array blocks that can be accessed through a named index. This will create a new ```ComponentArray``` whose data is a view into the original,
The main export of this package is the ``ComponentArray`` type. "Components" of ``ComponentArray``s
are really just array blocks that can be accessed through a named index. Accessing a component by name creates a new `ComponentArray` whose data is a view into the original,
allowing for standalone models to be composed together by simple function composition. In
essence, ```ComponentArray```s allow you to do the things you would usually need a modeling
essence, `ComponentArray`s allow you to do the things you would usually need a modeling
language for, but without actually needing a modeling language. The main targets are for use
in [DifferentialEquations.jl](https://github.com/SciML/DifferentialEquations.jl) and
[Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl), but anything that requires
flat vectors is fair game.
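
For a concrete feel for that view behavior, here is a minimal sketch (the names `u`, `body`, and `wing` are illustrative, not from this package; `getdata` is the exported accessor for the underlying flat array):

```julia
using ComponentArrays

u = ComponentArray(body = [1.0, 2.0], wing = [3.0, 4.0, 5.0])

wing = @view u[:wing]   # a view into u's underlying flat data
wing .= 0.0             # mutating the view mutates u itself
u.body[1] = 10.0        # property access reaches the same flat storage
getdata(u)              # the plain Vector{Float64} a solver sees
```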

Check out the [NEWS](https://github.com/SciML/ComponentArrays.jl/blob/master/NEWS.md) for new features by minor release version.


## General use
The easiest way to construct 1-dimensional ```ComponentArray```s (aliased as `ComponentVector`) is as if they were ```NamedTuple```s. In fact, a good way to think about them is as arbitrarily nested, mutable ```NamedTuple```s that can be passed through a solver.

The easiest way to construct 1-dimensional `ComponentArray`s (aliased as `ComponentVector`) is as if they were `NamedTuple`s. In fact, a good way to think about them is as arbitrarily nested, mutable `NamedTuple`s that can be passed through a solver.

```julia
julia> c = (a=2, b=[1, 2]);
julia> c = (a = 2, b = [1, 2]);

julia> x = ComponentArray(a=5, b=[(a=20., b=0), (a=33., b=0), (a=44., b=3)], c=c)
julia> x = ComponentArray(a = 5, b = [(a = 20.0, b = 0), (a = 33.0, b = 0), (a = 44.0, b = 3)], c = c)
ComponentVector{Float64}(a = 5.0, b = [(a = 20.0, b = 0.0), (a = 33.0, b = 0.0), (a = 44.0, b = 3.0)], c = (a = 2.0, b = [1.0, 2.0]))

julia> x.c.a = 400; x
julia> x.c.a = 400;
x
ComponentVector{Float64}(a = 5.0, b = [(a = 20.0, b = 0.0), (a = 33.0, b = 0.0), (a = 44.0, b = 3.0)], c = (a = 400.0, b = [1.0, 2.0]))

julia> x[8]
@@ -57,21 +47,25 @@ julia> collect(x)
1.0
2.0

julia> typeof(similar(x, Int32)) === typeof(ComponentVector{Int32}(a=5, b=[(a=20., b=0), (a=33., b=0), (a=44., b=3)], c=c))
julia> typeof(similar(x, Int32)) === typeof(ComponentVector{Int32}(a = 5, b = [
(a = 20.0, b = 0), (a = 33.0, b = 0), (a = 44.0, b = 3)], c = c))
true
```

`ComponentArray`s can be constructed from existing
`ComponentArray`s (currently nested fields cannot be changed this way):

```julia
julia> x = ComponentVector(a=1, b=2, c=3);
julia> x = ComponentVector(a = 1, b = 2, c = 3);

julia> ComponentVector(x; a=11, new=42)
julia> ComponentVector(x; a = 11, new = 42)
ComponentVector{Int64}(a = 11, b = 2, c = 3, new = 42)
```

Higher dimensional ```ComponentArray```s can be created too, but it's a little messy at the moment. The nice thing for modeling is that dimension expansion through broadcasted operations can create higher-dimensional ```ComponentArray```s automatically, so Jacobian cache arrays that are created internally with ```false .* x .* x'``` will be two-dimensional ```ComponentArray```s (aliased as `ComponentMatrix`) with proper axes. Check out the [ODE with Jacobian](https://github.com/jonniedie/ComponentArrays.jl/blob/master/examples/ODE_jac_example.jl) example in the examples folder to see how this looks in practice.
Higher dimensional `ComponentArray`s can be created too, but it's a little messy at the moment. The nice thing for modeling is that dimension expansion through broadcasted operations can create higher-dimensional `ComponentArray`s automatically, so Jacobian cache arrays that are created internally with `false .* x .* x'` will be two-dimensional `ComponentArray`s (aliased as `ComponentMatrix`) with proper axes. Check out the [ODE with Jacobian](https://github.com/jonniedie/ComponentArrays.jl/blob/master/examples/ODE_jac_example.jl) example in the examples folder to see how this looks in practice.

```julia
julia> x = ComponentArray(a=1, b=[2, 1, 4.0], c=c)
julia> x = ComponentArray(a = 1, b = [2, 1, 4.0], c = c)
ComponentVector{Float64}(a = 1.0, b = [2.0, 1.0, 4.0], c = (a = 2.0, b = [1.0, 2.0]))

julia> x2 = x .* x'
@@ -84,42 +78,42 @@ julia> x2 = x .* x'
1.0 2.0 1.0 4.0 2.0 1.0 2.0
2.0 4.0 2.0 8.0 4.0 2.0 4.0

julia> x2[:c,:c]
julia> x2[:c, :c]
3×3 ComponentMatrix{Float64} with axes Axis(a = 1, b = 2:3) × Axis(a = 1, b = 2:3)
4.0 2.0 4.0
2.0 1.0 2.0
4.0 2.0 4.0

julia> x2[:a,:a]
julia> x2[:a, :a]
1.0

julia> @view x2[:a,:c]
julia> @view x2[:a, :c]
ComponentVector{Float64,SubArray...}(a = 2.0, b = [1.0, 2.0])

julia> x2[:b,:c]
julia> x2[:b, :c]
3×3 ComponentMatrix{Float64} with axes FlatAxis() × Axis(a = 1, b = 2:3)
4.0 2.0 4.0
2.0 1.0 2.0
8.0 4.0 8.0
```
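
As a hedged sketch of how such a Jacobian cache might be used (the named block assignments below assume symbolic `setindex!` and views behave as in the indexing examples above):

```julia
jac = false .* x .* x'          # zero ComponentMatrix with x's axes on both sides

jac[:a, :a] = 2.0               # write the scalar (a, a) entry by name
jac[:b, :b] .= [1.0 0.0 0.0     # write the 3×3 (b, b) block by name
                0.0 1.0 0.0
                0.0 0.0 1.0]
```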


## Examples

### Differential equation example
This example uses ```@unpack``` from [Parameters.jl](https://github.com/mauro3/Parameters.jl)

This example uses `@unpack` from [Parameters.jl](https://github.com/mauro3/Parameters.jl)
for nice syntax. Example taken from:
https://github.com/JuliaDiffEq/ModelingToolkit.jl/issues/36#issuecomment-536221300

```julia
using ComponentArrays
using DifferentialEquations
using Parameters: @unpack


tspan = (0.0, 20.0)


## Lorenz system
function lorenz!(D, u, p, t; f=0.0)
function lorenz!(D, u, p, t; f = 0.0)
@unpack σ, ρ, β = p
@unpack x, y, z = u

@@ -129,41 +123,38 @@ function lorenz!(D, u, p, t; f=0.0)
return nothing
end

lorenz_p = (σ=10.0, ρ=28.0, β=8/3)
lorenz_ic = ComponentArray(x=0.0, y=0.0, z=0.0)
lorenz_p = (σ = 10.0, ρ = 28.0, β = 8/3)
lorenz_ic = ComponentArray(x = 0.0, y = 0.0, z = 0.0)
lorenz_prob = ODEProblem(lorenz!, lorenz_ic, tspan, lorenz_p)


## Lotka-Volterra system
function lotka!(D, u, p, t; f=0.0)
function lotka!(D, u, p, t; f = 0.0)
@unpack α, β, γ, δ = p
@unpack x, y = u

D.x = α*x - β*x*y + f
D.x = α*x - β*x*y + f
D.y = -γ*y + δ*x*y
return nothing
end

lotka_p = (α=2/3, β=4/3, γ=1.0, δ=1.0)
lotka_ic = ComponentArray(x=1.0, y=1.0)
lotka_p = (α = 2/3, β = 4/3, γ = 1.0, δ = 1.0)
lotka_ic = ComponentArray(x = 1.0, y = 1.0)
lotka_prob = ODEProblem(lotka!, lotka_ic, tspan, lotka_p)


## Composed Lorenz and Lotka-Volterra system
function composed!(D, u, p, t)
c = p.c #coupling parameter
@unpack lorenz, lotka = u

lorenz!(D.lorenz, lorenz, p.lorenz, t, f=c*lotka.x)
lotka!(D.lotka, lotka, p.lotka, t, f=c*lorenz.x)
lorenz!(D.lorenz, lorenz, p.lorenz, t, f = c*lotka.x)
lotka!(D.lotka, lotka, p.lotka, t, f = c*lorenz.x)
return nothing
end

comp_p = (lorenz=lorenz_p, lotka=lotka_p, c=0.01)
comp_ic = ComponentArray(lorenz=lorenz_ic, lotka=lotka_ic)
comp_p = (lorenz = lorenz_p, lotka = lotka_p, c = 0.01)
comp_ic = ComponentArray(lorenz = lorenz_ic, lotka = lotka_ic)
comp_prob = ODEProblem(composed!, comp_ic, tspan, comp_p)


## Solve problem
# We can solve the composed system...
comp_sol = solve(comp_prob)
@@ -172,6 +163,6 @@ comp_sol = solve(comp_prob)
lotka_sol = solve(lotka_prob)
```

Notice how cleanly the ```composed!``` function can pass variables from one function to another with no array index juggling in sight. This is especially useful for large models, as it becomes harder to keep track of top-level array positions when adding new components or deleting old ones. We could go further and compose ```composed!``` with other components ad (practically) infinitum with no mental bookkeeping.
Notice how cleanly the `composed!` function can pass variables from one function to another with no array index juggling in sight. This is especially useful for large models, as it becomes harder to keep track of top-level array positions when adding new components or deleting old ones. We could go further and compose `composed!` with other components ad (practically) infinitum with no mental bookkeeping.

The main benefit, however, is that our differential equations are now unit testable. Both ```lorenz``` and ```lotka``` can be run as their own ```ODEProblem``` with ```f``` set to zero to see the unforced response.
The main benefit, however, is that our differential equations are now unit testable. Both `lorenz` and `lotka` can be run as their own `ODEProblem` with `f` set to zero to see the unforced response.
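
One way such a unit test might look (a sketch, not from the original README; it checks that the unforced Lotka-Volterra equilibrium is a fixed point):

```julia
using Test

# With f = 0, (x, y) = (γ/δ, α/β) is the Lotka-Volterra equilibrium,
# so the derivative there should vanish.
eq = ComponentArray(x = lotka_p.γ / lotka_p.δ, y = lotka_p.α / lotka_p.β)
D = similar(eq)
lotka!(D, eq, lotka_p, 0.0)
@test all(abs.(D) .< 1e-10)
```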
20 changes: 10 additions & 10 deletions docs/make.jl
@@ -5,27 +5,27 @@ using Documenter.Remotes: GitHub
DocMeta.setdocmeta!(ComponentArrays, :DocTestSetup, :(using ComponentArrays))

makedocs(;
modules=[ComponentArrays],
format=Documenter.HTML(
canonical="https://sciml.github.io/ComponentArrays.jl/stable",
modules = [ComponentArrays],
format = Documenter.HTML(
canonical = "https://sciml.github.io/ComponentArrays.jl/stable",
),
pages=[
pages = [
"Home" => "index.md",
"Quick Start" => "quickstart.md",
"Indexing Behavior" => "indexing_behavior.md",
"Unpacking to StaticArrays" => "static_unpack.md",
"Examples" => [
"examples/DiffEqFlux.md",
"examples/adaptive_control.md",
"examples/ODE_jac.md",
"examples/ODE_jac.md"
],
"API" => "api.md",
"API" => "api.md"
],
repo=GitHub("SciML/ComponentArrays.jl"),
sitename="ComponentArrays.jl",
authors="Jonnie Diegelman",
repo = GitHub("SciML/ComponentArrays.jl"),
sitename = "ComponentArrays.jl",
authors = "Jonnie Diegelman"
)

deploydocs(;
repo="github.com/SciML/ComponentArrays.jl.git",
repo = "github.com/SciML/ComponentArrays.jl.git",
)
2 changes: 1 addition & 1 deletion docs/src/api.md
@@ -2,4 +2,4 @@

```@autodocs
Modules = [ComponentArrays]
```
```
38 changes: 23 additions & 15 deletions docs/src/examples/DiffEqFlux.md
@@ -1,8 +1,9 @@
# Neural ODEs with DiffEqFlux

Let's see how easy it is to make dense neural ODE layers from scratch.
Flux is used here just for the `glorot_uniform` function and the `ADAM` optimizer.

This example is taken from [the DiffEqFlux documentation](https://diffeqflux.sciml.ai/dev/Flux/).
This example is taken from [the DiffEqFlux documentation](https://diffeqflux.sciml.ai/dev/Flux/).

```julia
using ComponentArrays
@@ -16,14 +17,15 @@ using Optim: LBFGS
```

First, let's set up the problem and create the truth data.

```julia
u0 = Float32[2.; 0.]
u0 = Float32[2.0; 0.0]
datasize = 30
tspan = (0.0f0, 1.5f0)

function trueODEfunc(du, u, p, t)
true_A = [-0.1 2.0; -2.0 -0.1]
du .= ((u.^3)'true_A)'
du .= ((u .^ 3)'true_A)'
end

t = range(tspan[1], tspan[2], length = datasize)
@@ -32,27 +34,31 @@ ode_data = Array(solve(prob, Tsit5(), saveat = t))
```

Next we'll make a function that creates dense neural layer components. It is similar to `Flux.Dense`, except it doesn't handle the activation function. We'll do that separately.

```julia
dense_layer(in, out) = ComponentArray{Float32}(W=glorot_uniform(out, in), b=zeros(out))
dense_layer(in, out) = ComponentArray{Float32}(W = glorot_uniform(out, in), b = zeros(out))
```
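
Applied with an activation chosen at the call site, such a layer component might be used like this (an illustrative sketch, not part of the original example):

```julia
layer = dense_layer(2, 4)          # W is 4×2, b has length 4
x = randn(Float32, 2)
y = tanh.(layer.W * x .+ layer.b)  # activation applied separately
```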

Our parameter vector will be a `ComponentArray` that holds the ODE initial conditions and the dense neural layers.

```julia
layers = (L1=dense_layer(2, 50), L2=dense_layer(50, 2))
θ = ComponentArray(u=u0, p=layers)
layers = (L1 = dense_layer(2, 50), L2 = dense_layer(50, 2))
θ = ComponentArray(u = u0, p = layers)
```

We now have convenient struct-like access to the weights and biases of the layers for our neural ODE function while giving our optimizer something that acts like a flat array.

```julia
function dudt(u, p, t)
@unpack L1, L2 = p
return L2.W * tanh.(L1.W * u.^3 .+ L1.b) .+ L2.b
return L2.W * tanh.(L1.W * u .^ 3 .+ L1.b) .+ L2.b
end

prob = ODEProblem(dudt, u0, tspan)
```
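
A quick illustrative check of that dual nature (not in the original; the element count follows from the layer sizes above):

```julia
θ.p.L1.W       # struct-like access: the first layer's 50×2 weight matrix
θ.u            # the ODE initial-condition block
length(θ)      # flat view: 2 + (50*2 + 50) + (2*50 + 2) = 254 elements
θ[1:2] == θ.u  # the flat indices and the named block agree
```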

```julia
predict_n_ode(θ) = Array(solve(prob, Tsit5(), u0=θ.u, p=θ.p, saveat=t))
predict_n_ode(θ) = Array(solve(prob, Tsit5(), u0 = θ.u, p = θ.p, saveat = t))

function loss_n_ode(θ)
pred = predict_n_ode(θ)
@@ -63,23 +69,25 @@ loss_n_ode(θ)
```

Let's set up a training observation callback and train!

```julia
cb = function (θ, loss, pred; doplot=false)
cb = function (θ, loss, pred; doplot = false)
display(loss)
# plot current prediction against data
pl = scatter(t, ode_data[1,:], label = "data")
scatter!(pl, t, pred[1,:], label = "prediction")
pl = scatter(t, ode_data[1, :], label = "data")
scatter!(pl, t, pred[1, :], label = "prediction")
display(plot(pl))
return false
end
cb(θ, loss_n_ode(θ)...)

data = Iterators.repeated((), 1000)

res1 = sciml_train(loss_n_ode, θ, ADAM(0.05); cb=cb, maxiters=100)
cb(res1.minimizer, loss_n_ode(res1.minimizer)...; doplot=true)
res1 = sciml_train(loss_n_ode, θ, ADAM(0.05); cb = cb, maxiters = 100)
cb(res1.minimizer, loss_n_ode(res1.minimizer)...; doplot = true)

res2 = sciml_train(loss_n_ode, res1.minimizer, LBFGS(); cb=cb)
cb(res2.minimizer, loss_n_ode(res2.minimizer)...; doplot=true)
res2 = sciml_train(loss_n_ode, res1.minimizer, LBFGS(); cb = cb)
cb(res2.minimizer, loss_n_ode(res2.minimizer)...; doplot = true)
```

![](../assets/DiffEqFlux.gif)