# We use `parsefile` to load the model into an expression. The original model is an SEIR model with four states (susceptible, exposed, infected, and recovered) and parameters $\beta, \gamma, \mu, \sigma$.


# `gensym` gives us a unique name for the new function


g_func = gensym(argslist(model1.funcs[1])[1])

argslist(model1.funcs[1])[1] = g_func
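# As a minimal, hypothetical illustration (the symbol `:seir_ode` below is a stand-in, not taken from the model), each call to `gensym` derives a fresh, collision-free symbol from an existing one:

```julia
old = :seir_ode                # stand-in for argslist(model1.funcs[1])[1]
fresh = gensym(old)            # e.g. Symbol("##seir_ode#123")
@assert fresh != old           # the fresh symbol cannot clash with the old name
@assert gensym(old) != fresh   # every call produces a distinct symbol
```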

# ## Model Augmentations often require new parameters

#

# When we add the population growth term to the SEIR model, we introduce a new parameter $r$

# that needs to be supplied to the model. One problem with approaches that require scientists

# to modify source code is that adding new features necessitates changes to the

# APIs provided by the original author. SemanticModels.ModelTools provides a higher-level API

# for making these changes that assists in propagating them through to the model's API.

#

# For example, in this code we need to add an argument to the entrypoint function `main` and


# provide an anonymous function that conforms to the API that `DifferentialEquations` expects

# from its inputs.

mainx = findfunc(model1.expr, :main)[end]

pusharg!(mainx, :λ)
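# To make the mechanics concrete, here is a rough sketch of the idea behind `pusharg!` (not the library's implementation): a `function` definition `Expr` keeps its call signature as its first child, so appending a `Symbol` there adds a formal argument.

```julia
ex = :(function main(du, u, p, t) end)
push!(ex.args[1].args, :λ)     # ex.args[1] is the call signature main(du, u, p, t)
## ex now reads: function main(du, u, p, t, λ) ... end
```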


# An `ODEProblem` expects the user to provide a function $f(du, u, p, t)$ which takes the derivative buffer, the current system state, the parameters, and the current time as its arguments and updates the value of `du` in place. Since our new function `g_func` does not satisfy this interface, we need to introduce a wrapper function that does.
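# A wrapper that closes over the grafted function can adapt it to that interface. This is a sketch under the assumption that the generated function is bound to `g_func` and computes the state derivative out-of-place; the actual signature comes from the grafted code.

```julia
f = (du, u, p, t) -> du .= g_func(u, p, t)   # conforms to the f(du, u, p, t) interface
```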

#

# Here is an instance where having a smart compiler helps Julia. In many dynamic languages where this kind of metaprogramming would be easy, the runtime is not smart enough to inline these anonymous functions, which means there is additional runtime overhead to metaprogramming like this. Julia's compiler (and LLVM) can inline these functions, which drastically reduces that overhead.

# This simulation allows an epidemiologist to examine the effects of population growth on an SEIR disease outbreak. A brief analysis of this simulation shows that as you increase the population growth rate, you increase the final population of infected people. More sophisticated analysis could be employed to show something more interesting about this model.

#

# We have shown how you can use SemanticModels.jl to combine features of various ODE systems and solve them with a state-of-the-art solver, increasing the capabilities of code that implements a scientific model. We call this combination process grafting and believe that it supports a frequent use case of scientific programming.

# As taught by the scientific computing education group [Software Carpentry](https://swcarpentry.github.io/), the best practice for composing scientific models is to have each component write files to disk and then use a workflow tool such as [Make](https://swcarpentry.github.io/make-novice/) to orchestrate the execution of the modeling scripts.

#

# An alternative approach is to design modeling frameworks for representing the models. The problem with this avenue becomes apparent when models are composed: the frameworks must be interoperable in order to build combined models. ModelTools avoids this problem by representing the models as code and manipulating that code directly. The interoperation of two models is defined by user-supplied functions in a fully featured programming language.


# Here is the baseline model, which is read in from a text file. Instead of using `parsefile`, you could use a `quote ... end` block to write the baseline model directly in this script.


# This .+ node is added so that we have something to grab onto

# in the metaprogramming. It is the identity operation: $\forall a,\ .+(a) == a$.


return .+(β[1] .* x.^0)

end
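# A quick check of that identity: the unary broadcasted `+` returns its argument elementwise unchanged, so inserting this node leaves the model's semantics intact.

```julia
a = [1.0, 2.0, 3.0]
@assert .+(a) == a             # unary .+ is the identity on its argument
```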

mstats = deepcopy(m)

poly(m)

# Some *generator elements* will come in handy for building elements of the transformation group.

# $T_x,T_1$ are *generators* for our group of transformations $T = \langle T_x, T_1 \rangle$. $T_1$ adds a constant to our polynomial and $T_x$ increments all the powers of the terms by 1. Any polynomial can be generated by these two operations. The proof of Horner's rule for evaluating $p(x)$ gives a construction for how to create $f(x,\beta) = p(x)$ from these two operations.


@show Tₓ = Pow(1)

@show T₁ = AddConst()
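# To make the Horner construction concrete, here is a small self-contained sketch (independent of the `Pow`/`AddConst` types): evaluating $p(x) = 2x^2 + 3x + 1$ alternates "add a constant" with "multiply by $x$", mirroring the alternation of $T_1$ and $T_x$.

```julia
horner(coeffs, x) = foldl((acc, c) -> acc * x + c, coeffs)   # coeffs from highest power down
@assert horner([2, 3, 1], 2.0) == 15.0                       # p(2) = 8 + 6 + 1
```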


#

# Mathematically, a pipeline is defined as $r_n = P(m_1,\dots,m_n, c_1,\dots,c_n)$ based on the recurrence,

#

# $r_0 = m_1(c)$ where $c$ is a constant value, and


#

# $r_i = m_i(c_i(r_{i-1}))$

#
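# A minimal sketch of that recurrence (names hypothetical, not the `Pipelines.Pipeline` implementation):

```julia
function run_pipeline(ms, cs, c)
    r = ms[1](c)               # r₀ = m₁(c)
    for i in 2:length(ms)
        r = ms[i](cs[i](r))    # rᵢ = mᵢ(cᵢ(rᵢ₋₁))
    end
    return r
end
```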


# This workflow connects the two models so that we simulate the agent based model and then perform a regression on the outputs.

P = Pipelines.Pipeline(deepcopy.([magents, mstats]),

[(m, args...) -> begin

Random.seed!(42)

results = Any[]

Mod = eval(m.expr)


Base.invokelatest(Mod.main, data...)
end

],

Any[(10)]

)

)

# Warning: Pipelines can only be run once. Recreate the pipeline and run it again if necessary.

function connector(finalcounts, i, j)
    ## ... (body elided in this excerpt) ...
    return X, Y
end

P = Pipelines.Pipeline(deepcopy.([magents, mstats]),

[(m, args...) -> begin

Random.seed!(42)

results = Any[]

Mod = eval(m.expr)


(m, results...) -> begin

data = connector(results..., 1, 4)

Mod = eval(m.expr)

Base.invokelatest(Mod.main, data...)


end

],

Any[(10)]

)

P.results[end][2]

# Here is the data we observed when running the first stage of the pipeline; stage two fits a polynomial to these observations.


#


#

# SemanticModels.jl also provides transformations on these models that are grounded in category theory and abstract algebra. Concepts from category theory such as functors and product categories allow us to build a general framework fit for any modeling task. In the language of category theory, the pipelining functor on models commutes with the product functor on transformations.