
Function Conflict Bug Fixes (#174)

* Fix redefined `head` function between typegraphs.jl and exprs.jl

* Fix test errors with package and function conflicts

* Added missing dependency
Micah Halter committed 12504c8640 (2 years ago, via GitHub)
Changed files:
  Project.toml (4 changed lines)
  docker/Dockerfile (2 changed lines)
  src/modeltools/typegraphs.jl (3 changed lines)
  test/odegraft.jl (16 changed lines)
  test/runtests.jl (32 changed lines)
  test/workflow.jl (28 changed lines)

Project.toml

@@ -29,7 +29,9 @@ DiffEqBase = "2b5f629d-d688-5b77-993f-72d75c75574e"
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
LsqFit = "2fda8390-95c7-5789-9bda-21331edee243"
Polynomials = "f27b6e38-b328-58d1-80ce-0feddd5e7a45"
+Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
+Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
Test = "8dfed614-e22c-5e08-85e1-65c5234f0b40"
[targets]
-test = ["DifferentialEquations", "LsqFit", "Polynomials", "Test"]
+test = ["DifferentialEquations", "LsqFit", "Polynomials", "Printf", "Statistics", "Test"]

docker/Dockerfile

@@ -15,7 +15,7 @@ RUN julia -e 'ENV["JUPYTER"]="jupyter"; using Pkg; Pkg.add("IJulia")'
RUN julia --project -e 'using Pkg; Pkg.develop("SemanticModels");'
RUN julia --project -e 'using Pkg; \
-    Pkg.add(["DifferentialEquations", "LsqFit", "Polynomials", "Test"]); \
+    Pkg.add(["DifferentialEquations", "LsqFit", "Polynomials", "Printf", "Statistics", "Test"]); \
    pkg"precompile";'
## Install nlp example pip packages

src/modeltools/typegraphs.jl

@@ -48,9 +48,6 @@ function Edges(snapshots)
    return fs
end
-head(x) = :nothing
-head(x::Expr) = x.head
-
annotate(x::Any) = x
function annotate(x::Expr)
test/odegraft.jl

@@ -12,7 +12,7 @@ using SemanticModels.ModelTools
using SemanticModels.ModelTools.ExpODEModels
# ## Loading the original model
-# We use parsefile to load the model into an expression. The original model is an SEIR model which has 4 states: susceptible, exposed, infected, and recovered. It has parameters $\beta, \gamma, \mu, \sigma$.
+# We use parsefile to load the model into an expression. The original model is an SEIR model which has 4 states: susceptible, exposed, infected, and recovered. It has parameters $\beta, \gamma, \mu, \sigma$.
expr1 = parsefile("../examples/epicookbook/src/SEIRmodel.jl")
model1 = model(ExpODEModel, expr1)
@@ -60,29 +60,29 @@ pushfirst!(bodyblock(model1.funcs[1]), :(N = sum(Y)))
pusharg!(model1.funcs[1], :r)
# gensym gives us a unique name for the new function
-g = gensym(argslist(model1.funcs[1])[1])
-argslist(model1.funcs[1])[1] = g
+g_func = gensym(argslist(model1.funcs[1])[1])
+argslist(model1.funcs[1])[1] = g_func
# ## Model Augmentations often require new parameters
#
# When we add the population growth term to the SEIR model, we introduce a new parameter $r$
# that needs to be supplied to the model. One problem with approaches that require scientists
-# to modify source code is the fact that adding the new features necessitates changes to the
+# to modify source code is the fact that adding the new features necessitates changes to the
# APIs provided by the original author. SemanticModels.ModelTools provides a higher level API
# for making these changes that assist in propagating the necessary changes to the API.
#
-# For example, in this code we need to add an argument to the entrypoint function `main` and
+# For example, in this code we need to add an argument to the entrypoint function `main` and
# provide an anonymous function that conforms to the API that `DifferentialEquations` expects
# from its inputs.
mainx = findfunc(model1.expr, :main)[end]
pusharg!(mainx, :λ)
-# An `ODEProblem` expects the user to provide a function $f(du, u, p, t)$ which takes the current fluxes, current system state, parameters, and current time as its arguments and updates the value of `du`. Since our new function `g` does not satisfy this interface, we need to introduce a wrapper function that does.
+# An `ODEProblem` expects the user to provide a function $f(du, u, p, t)$ which takes the current fluxes, current system state, parameters, and current time as its arguments and updates the value of `du`. Since our new function `g_func` does not satisfy this interface, we need to introduce a wrapper function that does.
#
# Here is an instance where having a smart compiler helps julia. In many dynamic languages where this kind of metaprogramming would be easy, the runtime is not smart enough to inline these anonymous functions, which means that there is additional runtime performance overhead to metaprogramming like this. Julia's compiler (and LLVM) can inline these functions, which drastically reduces that overhead.
-setarg!(model1.calls[end], :seir_ode, :((du,u,p,t)->$g(du,u,p,t,λ)))
+setarg!(model1.calls[end], :seir_ode, :((du,u,p,t)->$g_func(du,u,p,t,λ)))
@show model1.expr
NewModule = eval(model1.expr)
@@ -119,5 +119,3 @@ end
# This simulation allows an epidemiologist to examine the effects of population growth on an SEIR disease outbreak. A brief analysis of this simulation shows that as you increase the population growth rate, you increase the final population of infected people. More sophisticated analysis could be employed to show something more interesting about this model.
#
# We have shown how you can use SemanticModels.jl to combine features of various ODE systems and solve them with a state-of-the-art solver to increase the capabilities of a code that implements a scientific model. We call this combination process grafting and believe that it supports a frequent use case of scientific programming.
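The rename from `g` to `g_func` avoids clashing with other top-level `g` bindings created when several test files run in one session. A minimal sketch of the gensym-plus-wrapper pattern used above, with hypothetical dynamics standing in for the grafted SEIR right-hand side:

```julia
# `gensym` produces a name guaranteed not to collide with any existing binding.
g_func = gensym(:seir_ode)                        # e.g. Symbol("##seir_ode#257")
@eval $g_func(du, u, p, t, λ) = (du .= λ .* u)    # hypothetical 5-argument dynamics

# DifferentialEquations expects f(du, u, p, t), so close over the extra parameter:
λ = 0.5
wrapped = @eval (du, u, p, t) -> $g_func(du, u, p, t, $λ)
du = zeros(2)
wrapped(du, ones(2), nothing, 0.0)
du                                                # -> [0.5, 0.5]
```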

test/runtests.jl

@@ -9,9 +9,15 @@ using GLM
using DataFrames
using Plots
-include("parse.jl")
-include("cassette.jl")
-include("transform/ode.jl")
+tests = ["parse.jl",
+         "cassette.jl",
+         "transform/ode.jl"]
+for test in tests
+    @testset "Running $test" begin
+        include(test)
+    end
+end
examples = ["agentbased.jl",
"agentgraft.jl",
@@ -23,12 +29,16 @@ examples = ["agentbased.jl",
"pseudo_polynomial_regression.jl",
"odegraft.jl",
]
-for ex in examples
-    @info "Running example: " file=ex
-    try
-        include(ex)
-    catch err
-        println(err)
-        @warn "Error running: " file=ex
-    end
-end
+@testset "Test all examples" begin
+    for ex in examples
+        @info "Running example: " file=ex
+        try
+            include(ex)
+            @test true == true
+        catch err
+            println(err)
+            @info "Error running " file=ex
+            @test true == false
+        end
+    end
+end
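The new structure records each example as a pass or fail test instead of letting one broken example abort the whole run. A minimal standalone sketch of the pattern (file names hypothetical; both are missing here, so both iterations exercise the catch branch):

```julia
using Test

@testset "Test all examples" begin
    for ex in ["good_example.jl", "broken_example.jl"]
        try
            include(ex)
            @test true          # records a pass for this example
        catch err
            @warn "Error running" file=ex exception=err
            @test false         # records a failure but keeps iterating
        end
    end
end
```

`@test true` is equivalent to the diff's `@test true == true`; the point is that an exception becomes a recorded test failure rather than a crash.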

test/workflow.jl

@@ -9,7 +9,7 @@
#
# As taught by the scientific computing education group [Software Carpentry](https://swcarpentry.github.io/), the best practice for composing scientific models is to have each component write files to disk and then use a workflow tool such as [Make](https://swcarpentry.github.io/make-novice/) to orchestrate the execution of the modeling scripts.
#
-# An alternative approach is to design modeling frameworks for representing the models. The problem with this avenue becomes apparent when models are composed. The frameworks must be interoperable in order to make combined models. ModelTools avoids this problem by representing the models as code and manipulating the codes. The interoperation of two models is defined by user supplied functions in a fully featured programming language.
+# An alternative approach is to design modeling frameworks for representing the models. The problem with this avenue becomes apparent when models are composed. The frameworks must be interoperable in order to make combined models. ModelTools avoids this problem by representing the models as code and manipulating the codes. The interoperation of two models is defined by user supplied functions in a fully featured programming language.
using SemanticModels.Parsers
using SemanticModels.ModelTools
@@ -29,7 +29,7 @@ println("demo parameters:\n\tsamples=$samples\n\tnsteps=$nsteps")
# ## Baseline SIRS model
#
-# Here is the baseline model, which is read in from a text file. You could instead of using `parsefile` use a `quote/end` block to code up the baseline model in this script.
+# Here is the baseline model, which is read in from a text file. You could instead of using `parsefile` use a `quote/end` block to code up the baseline model in this script.
#
# <img src="https://docs.google.com/drawings/d/e/2PACX-1vSeA7mAQ-795lLVxCWXzbkFQaFOHMpwtB121psFV_2cSUyXPyKMtvDjssia82JvQRXS08p6FAMr1hj1/pub?w=1031&amp;h=309">
@@ -38,10 +38,10 @@ m = model(ExpStateModel, expr)
function returns(block::Vector{Any})
-    filter(x->(head(x)==:return), block)
+    filter(x->(ModelTools.head(x)==:return), block)
end
-returntuples = (bodyblock(filter(x->isa(x, Expr), findfunc(m.expr, :main))[end])
-    |> returns
+returntuples = (bodyblock(filter(x->isa(x, Expr), findfunc(m.expr, :main))[end])
+    |> returns
    .|> x-> x.args[1].args )
push!(returntuples[1], :((ρ=ρ, μ=μ, n=n)))
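Qualifying the call as `ModelTools.head` sidesteps the same `head` export clash fixed in typegraphs.jl. As a hedged illustration of what the chain above extracts (stand-in expressions rather than the real model): `|>` pipes the whole collection, while `.|>` broadcasts over it.

```julia
block = Any[:(x = 1), :(return (ρ, μ)), :(return (n,))]   # stand-in body block
rets  = filter(x -> x isa Expr && x.head == :return, block)
rets .|> x -> x.args[1].args                               # -> [[:ρ, :μ], [:n]]
```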
@@ -75,7 +75,7 @@ expr = quote
function f(x, β)
    # This .+ node is added so that we have something to grab onto
-    # in the metaprogramming. It is the ∀a .+(a) == a.
+    # in the metaprogramming. It is the ∀a .+(a) == a.
    return .+(β[1].* x.^0)
end
@@ -248,7 +248,7 @@ mstats = deepcopy(m)
poly(m)
# Some *generator elements* will come in handy for building elements of the transformation group.
-# $T_x,T_1$ are *generators* for our group of transformations $T = \langle T_x, T_1 \rangle$. $T_1$ adds a constant to our polynomial and $T_x$ increments all the powers of the terms by 1. Any polynomial can be generated by these two operations. The proof of Horner's rule for evaluating $p(x)$ gives a construction for how to create $f(x,\beta) = p(x)$ from these two operations.
+# $T_x,T_1$ are *generators* for our group of transformations $T = \langle T_x, T_1 \rangle$. $T_1$ adds a constant to our polynomial and $T_x$ increments all the powers of the terms by 1. Any polynomial can be generated by these two operations. The proof of Horner's rule for evaluating $p(x)$ gives a construction for how to create $f(x,\beta) = p(x)$ from these two operations.
@show Tₓ = Pow(1)
@show T₁ = AddConst()
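The generator claim can be made concrete with a toy encoding (a hedged sketch, not the package's `Pow`/`AddConst` API): represent $p(x)$ by its coefficient vector, lowest power first, so $T_x$ shifts coefficients up one power and $T_1$ adds one to the constant term. Horner's factorization $x^2 + 2x + 3 = ((1)x + 2)x + 3$ then reads right off the nesting:

```julia
Tx(p) = [0.0; p]                                     # multiply the polynomial by x
T1(p) = isempty(p) ? [1.0] : [p[1] + 1.0; p[2:end]]  # add the constant 1

p = T1(T1(T1(Tx(T1(T1(Tx(T1(Float64[]))))))))
p   # -> [3.0, 2.0, 1.0], i.e. 3 + 2x + x^2
```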
@@ -274,7 +274,7 @@ result′.r
#
# Mathematically, a pipeline is defined as $r_n = P(m_1,\dots,m_n, c_1,\dots,c_n)$ based on the recurrence,
#
-# $r_0 = m_1(c)$ where $c$ is a constant value, and
+# $r_0 = m_1(c)$ where $c$ is a constant value, and
#
# $r_i = m_i(c_i(r_{i-1}))$
#
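A worked toy instance of this recurrence may help (hypothetical models $m_i$ and connectors $c_i$, not the `Pipelines` API):

```julia
ms = (x -> x .+ 1, x -> sum(x))   # the models m₁, m₂
cs = (identity,    x -> x ./ 2)   # the connectors c₁, c₂

r = ms[1]([1.0, 2.0, 3.0])        # r₀ = m₁(c) with constant input c = [1, 2, 3]
r = ms[2](cs[2](r))               # next stage: m₂(c₂(r₀))
r                                 # sum([2, 3, 4] ./ 2) == 4.5
```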
@@ -315,7 +315,7 @@ end
# This workflow connects the two models so that we simulate the agent based model and then perform a regression on the outputs.
P = Pipelines.Pipeline(deepcopy.([magents, mstats]),
-    [(m, args...) -> begin
+    [(m, args...) -> begin
        Random.seed!(42)
        results = Any[]
        Mod = eval(m.expr)
@@ -332,7 +332,7 @@ P = Pipelines.Pipeline(deepcopy.([magents, mstats]),
        Base.invokelatest(Mod.main, data...) end
    ],
    Any[(10)]
-)
+)
# Warning: Pipelines can only be run once. Recreate the pipeline and run it again if necessary.
@@ -452,7 +452,7 @@ function connector(finalcounts, i, j)
    return X,Y
end
P = Pipelines.Pipeline(deepcopy.([magents, mstats]),
-    [(m, args...) -> begin
+    [(m, args...) -> begin
        Random.seed!(42)
        results = Any[]
        Mod = eval(m.expr)
@@ -467,7 +467,7 @@ P = Pipelines.Pipeline(deepcopy.([magents, mstats]),
    (m, results...) -> begin
        data = connector(results..., 1, 4)
        Mod = eval(m.expr)
-        Base.invokelatest(Mod.main, data...)
+        Base.invokelatest(Mod.main, data...)
    end
    ],
    Any[(10)]
@@ -504,7 +504,7 @@ P.results[end][2]
# Here is the data we observed when running the first stage of the pipeline; stage two fits a polynomial to these observations.
table = map(x->(round(x.params.ρ, digits=4), last(x.counts[end])), P.results[2][1]) |> sort
-try
+try
    using Plots
catch
    @warn "Plotting is not available, make a table"
@@ -543,7 +543,7 @@ p
#
# As taught by the scientific computing education group [Software Carpentry](https://swcarpentry.github.io/), the best practice for composing scientific models is to have each component write files to disk and then use a workflow tool such as [Make](https://swcarpentry.github.io/make-novice/) to orchestrate the execution of the modeling scripts.
#
-# An alternative approach is to design modeling frameworks for representing the models. The problem with this avenue becomes apparent when models are composed. The frameworks must be interoperable in order to make combined models. ModelTools avoids this problem by representing the models as code and manipulating the codes. The interoperation of two models is defined by user supplied functions in a fully featured programming language.
+# An alternative approach is to design modeling frameworks for representing the models. The problem with this avenue becomes apparent when models are composed. The frameworks must be interoperable in order to make combined models. ModelTools avoids this problem by representing the models as code and manipulating the codes. The interoperation of two models is defined by user supplied functions in a fully featured programming language.
#
# SemanticModels.jl also provides transformations on these models that are grounded in category theory and abstract algebra. The concepts of category theory such as Functors and Product Categories allow us to build a general framework fit for any modeling task. In the language of category theory, the Pipelining functor on models commutes with the Product functor on transformations.
#
