
add initial docs

pull/16/head
James Fairbanks 3 years ago
parent
commit
b85aff0580
  1.    4  .gitignore
  2.    1  Project.toml
  3.  216  doc/index.html
  4.   36  doc/make.jl
  5.   26  doc/server.go
  6.   51  doc/src/img/aske.dot
  7.    0  doc/src/img/covar_fig1.jpg
  8.   50  doc/src/img/election_metamodel.dot
  9.   54  doc/src/img/extraction.dot
  10. 117  doc/src/img/flu_pipeline.dot
  11.   0  doc/src/img/flu_pipeline.pdf
  12.  29  doc/src/img/formulas_sir.tex
  13.  56  doc/src/img/metamodel.dot
  14. 112  doc/src/img/pipeline.dot
  15.   2  doc/src/img/schema.dot
  16.   0  doc/src/img/schema.pdf
  17.  60  doc/src/index.md
  18.  47  doc/src/slides.md
  19.  25  src/Semantics.jl
  20.   9  src/diffeq.jl

4
.gitignore vendored

@@ -265,4 +265,6 @@ TSWLatexianTemp*
*-tags.tex
# standalone packages
*.sta
doc/src/img/*.dot.png
/doc/src/img/*.dot.svg

1
Project.toml

@@ -8,6 +8,7 @@ DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
DiffEqBase = "2b5f629d-d688-5b77-993f-72d75c75574e"
DifferentialEquations = "0c46a032-eb83-5123-abaf-570d42b7fbaa"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
GLM = "38e38edf-8417-5370-95a0-9cbb8c7f171a"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"

216
doc/index.html

@@ -0,0 +1,216 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
<head>
<meta charset="utf-8" />
<meta name="generator" content="pandoc" />
<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
<meta name="author" content="Christine Herlihy and Scott Appling and Erica Briscoe and James Fairbanks" />
<title>Automatic Scientific Knowledge Extraction: Architecture, Approaches, and Techniques</title>
<style type="text/css">
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
span.underline{text-decoration: underline;}
div.column{display: inline-block; vertical-align: top; width: 50%;}
</style>
<!--[if lt IE 9]>
<script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
<![endif]-->
</head>
<body>
<header>
<h1 class="title">Automatic Scientific Knowledge Extraction: Architecture, Approaches, and Techniques</h1>
<p class="author">Christine Herlihy and Scott Appling and Erica Briscoe and James Fairbanks</p>
<p class="date">Dec 1, 2018</p>
</header>
<h1 id="introduction">Introduction</h1>
<p>The ASKE program aims to extract knowledge from the body of scientific work. Our view is that the best way to prove that you have extracted knowledge is to show that you can build new models out of the components of old models. The purpose of these new models may be to improve the fidelity of the original model with respect to the phenomenon of interest, or to probe the mechanistic relationships between phenomena. Another use case for adapting models to new contexts is to use a simulation to provide data that cannot be obtained through experimentation or observation.</p>
<p>Our initial scientific modeling domain is the epidemiological study of disease spread, commonly called compartmental or SIR models. These models are compelling because the literature demonstrates the use of a repetitive model structure with many variations. The math represented therein spans both discrete and continuous equations, and the algorithms that solve these models are diverse. Additionally, this general model may apply to various national defense related phenomena, such as viruses on computer networks <span class="citation" data-cites="cohen_efficient_2003"></span> or misinformation in online media <span class="citation" data-cites="budak_limiting_2011"></span>.</p>
<p>The long term goal for our project is to reduce the labor cost of integrating models between scientists so that researchers can more efficiently build on the research of others. Such an effort is usefully informed by prior work and practices within the areas of software engineering and open source software development. Having the source code for a library or package is essential to building on it, but perhaps even more important are the affordances provided by open source licensing models and (social) software distribution systems, which can significantly reduce the effort required to download others’ code and can streamline execution, cutting setup time from hours to minutes. This low barrier to entry is responsible for the proliferation of open source software that we see today. By extracting knowledge from scientific software and representing that knowledge, including model semantics, in knowledge graphs, along with leveraging type systems to conduct program analysis, we aim to increase the interoperability and development of scientific models at large scale.</p>
<h1 id="scientific-domain-and-relevant-papers">Scientific Domain and Relevant Papers</h1>
<p>We have focused our initial knowledge artifact gathering efforts on the scientific domain of epidemiology broadly defined, so as to render the diffusion of both disease and information in scope. Given that our ultimate goal is to automate the extraction of calls to epidemiological modeling libraries and functions, as well as the unitful parameters contained therein, we have conducted a preliminary literature review for the purpose of: (1) identifying a subset of papers published in this domain that leverage open-source epidemiological modeling libraries, and/or agent-based simulation packages, and make their code available to other researchers; and (2) identifying causally dependent research questions that could benefit from, and/or be addressed by the modification and/or chaining of individual models, as these questions can serve as foundational test cases for the meta-models we develop.</p>
<h2 id="papers-and-libraries">Papers and Libraries</h2>
<p>We began the literature review and corpus construction process by identifying a representative set of open-source software (OSS) frameworks for epidemiological modeling, and/or agent-based simulation, including: NDLib, EMOD, Pathogen, NetLogo, EpiModels, and FRED. These frameworks were selected for initial consideration based on: (1) the scientific domains and/or research questions they are intended to support (specifically, disease transmission and information diffusion); (2) the programming language(s) in which they are implemented (Julia, Python, R, C++); and (3) the extent to which they have been used in peer-reviewed publications that include links to their source code. We provide a brief overview of the main components of each package below, as well as commentary on the frequency with which each package has been used in relevant published works.</p>
<h3 id="ndlib">NDLib</h3>
<p>NDLib is an open-source package developed by a research team from the Knowledge Discovery and Data Mining Laboratory (KDD-lab) at the University of Pisa, and written in Python on top of the NetworkX library. NDLib is intended to aid social scientists, computer scientists, and biologists in modeling/simulating the dynamics of diffusion processes in social, biological, and infrastructure networks <span class="citation" data-cites="NDlib1 NetworkX"></span>. NDLib includes built-in implementations of many common epidemiological models (e.g., SIR, SEIR, SEIS, etc.), as well as models of opinion dynamics (e.g., Voter, Q-Voter, Majority Rule, etc.). In addition, there are several features intended to make NDLib available to non-developer domain experts, including an abstract Network Diffusion Query Language (NDQL), an experiment server that is query-able through a RESTful API to allow for remote execution, and a web-based GUI that can be used to visualize and run epidemic simulations <span class="citation" data-cites="NDlib1"></span>.</p>
<p>The primary disadvantage of NDLib is that it is relatively new: the associated repository on GitHub was created in 2016, with the majority of commits beginning in 2017; two supporting software system architecture papers were published in 2017-2018 <span class="citation" data-cites="ndlibDocs NDlib1 NDlib2"></span>. As such, while there are several factors which bode well for future adoption (popularity of Python for data science workflows and computer science education; user-friendliness of the package, particularly for users already familiar with NetworkX, etc.), the majority of published works citing NDLib are papers written by the package authors themselves, and focus on information diffusion.</p>
<h3 id="epimodels">Epimodels</h3>
<p>EpiModel is an R package, written by researchers at Emory University and the University of Washington, that provides tools for simulating and analyzing mathematical models of infectious disease dynamics. Supported epidemic model classes include deterministic compartmental models, stochastic individual contact models, and stochastic network models. Disease types include SI, SIR, and SIS epidemics with and without demography, with utilities available for expansion to construct and simulate epidemic models of arbitrary complexity. The network model class is based on the statistical framework of temporal exponential random graph models (ERGMs) implemented in the Statnet suite of software for R <span class="citation" data-cites="JSSv084i08"></span>. The library is widely used and its source code is available; it would make a strong addition to our system once integrated. EpiModel’s development has been supported by several grants from the National Institutes of Health (NIH). Several publications in well-regarded venues, including PLoS ONE, Infectious Diseases, and the Journal of Statistical Software, use the library.</p>
<h3 id="netlogo">NetLogo</h3>
<p>NetLogo, according to the User Manual, is a programmable modeling environment for simulating natural and social phenomena. It was authored by Uri Wilensky in 1999 and has been in continuous development ever since at the Center for Connected Learning and Computer-Based Modeling. NetLogo is particularly well suited for modeling complex systems developing over time. Modelers can give instructions to hundreds or thousands of “agents” all operating independently. This makes it possible to explore the connection between the micro-level behavior of individuals and the macro-level patterns that emerge from their interaction. NetLogo lets students open simulations and “play” with them, exploring their behavior under various conditions. It is also an authoring environment which enables students, teachers and curriculum developers to create their own models. NetLogo is simple enough for students and teachers, yet advanced enough to serve as a powerful tool for researchers in many fields. NetLogo has extensive documentation and tutorials. It also comes with the Models Library, a large collection of pre-written simulations that can be used and modified. These simulations address content areas in the natural and social sciences including biology and medicine, physics and chemistry, mathematics and computer science, and economics and social psychology. Several model-based inquiry curricula using NetLogo are available and more are under development. NetLogo is the next generation of the series of multi-agent modeling languages including StarLogo and StarLogoT. NetLogo runs on the Java Virtual Machine, so it works on all major platforms (Mac, Windows, Linux, etc.). It is run as a desktop application. Command line operation is also supported. <span class="citation" data-cites="tisue2004netlogo nlweb"></span> NetLogo has been widely used by the simulation research community at large for nearly two decades. Although there is a rich literature that mentions its use, it may be more difficult to identify scripts that were authored to pair with published research papers using this modeling library, both because of the amount of time that has passed and because researchers may no longer monitor the email addresses listed on their publications.</p>
<h3 id="emod">EMOD</h3>
<p>Epidemiological MODeling (EMOD) is an open-source agent-based modeling software package developed by the Institute for Disease Modeling (IDM), and written in C++ <span class="citation" data-cites="emodRepo emodDocs"></span>. The primary use case that EMOD is intended to support is the stochastic agent-based modeling of disease transmission over space and time. EMOD has built-in support for modeling malaria, HIV, tuberculosis, sexually transmitted infections (STIs), and vector-borne diseases; in addition, a generic modeling class is provided, which can be inherited from and/or modified to support the modeling of diseases that are not explicitly supported <span class="citation" data-cites="emodDocs emodRepo"></span>.</p>
<p>The documentation provided is thorough, and the associated GitHub repo has commits starting in July 2015; the most recent commit was made in July 2018 <span class="citation" data-cites="emodRepo emodDocs"></span>. EMOD also includes a regression test suite, so that stochastic simulation results can be compared to a reference set of results and assessed for statistical similarity within an acceptable range. In addition, EMOD leverages Message Passing Interface (MPI) to support within- and among-simulation(s)-level parallelization, and outputs results as JSON blobs. The IDM conducts research, and as such, there are a relatively large number of publications associated with the institute that leverage EMOD and make their data and code accessible. One potential drawback of EMOD relative to more generic agent-based modeling packages is that domain-wise, coverage is heavily slanted toward epidemiological models; built-in support for information diffusion models is not included.</p>
<h3 id="pathogen">Pathogen</h3>
<p>Pathogen is an open-source epidemiological modeling package written in Julia <span class="citation" data-cites="pathogenRepo"></span>. Pathogen is intended to allow researchers to model the spread of infectious disease using stochastic, individual-level simulations, and perform Bayesian inference with respect to transmission pathways <span class="citation" data-cites="pathogenRepo"></span>. Pathogen includes built-in support for SEIR, SEI, SIR, and SI models, and also includes example Jupyter notebooks and methods to visualize simulation results (e.g., disease spread over a graph-based network, where vertices represent individual agents). With respect to the maturity of the package, the first commit to an alpha version of Pathogen occurred in 2015, and the master branch contains commits within the last month (e.g., November 2018) <span class="citation" data-cites="pathogenRepo"></span>. Pathogen is appealing because it could be integrated into our Julia-based meta-modeling approach without incurring the overhead associated with wrapping non-Julia-based packages. However, one of the potential disadvantages of the Pathogen package is that there is no associated software or system architecture paper; as such, it is difficult to locate papers that use this package.</p>
<h3 id="fred">FRED</h3>
<p>FRED, which stands for a Framework for Reconstructing Epidemic Dynamics, is an open-source, agent-based modeling software package written in C++, developed by the Pitt Public Health Dynamics Laboratory for the purpose of modeling the spread of disease(s) and assessing the impact of public health intervention(s) (e.g., vaccination programs, school closures, etc.) <span class="citation" data-cites="pittir24611 fredRepo"></span>. FRED is notable for its use of synthetic populations that are based on U.S. Census Data, and as such, allow for the instantiation of agents whose spatiotemporal and sociodemographic characteristics, including household membership and location, as well as income level and patterns of employment and/or school attendance, reflect the actual distribution of the population in the selected geographic area(s) within the United States <span class="citation" data-cites="pittir24611"></span>. FRED is modular and parameterized to allow for support of different diseases, and the associated software paper, as well as the GitHub repository, provide clear, robust documentation for use. One advantage of FRED relative to some of the other packages we have reviewed is that it is relatively mature. Commits range from 2014 to 2016, and the associated software paper was published in 2013; as such, epidemiology researchers have had more time to become familiar with the software and cite it in their works <span class="citation" data-cites="pittir24611 fredRepo"></span>. A related potential disadvantage is that FRED does not appear to be under active development <span class="citation" data-cites="pittir24611 fredRepo"></span>.</p>
<h2 id="evaluation">Evaluation</h2>
<p>The packages outlined in the preceding section are all open-source, and written in Turing-complete programming languages; thus, we believe any subset of them would satisfy the open-source and complexity requirements for artifact selection outlined in the solicitation. As such, the primary dimensions along which we have evaluated and compared our initial set of packages include: (1) the frequency with which a given package has been cited in published papers that include links or references to their code; (2) the potential trend of increasing adoption/citation over the near-to-medium term; (3) the existence of thorough documentation; and (4) the feasibility of cross-platform and/or cross-domain integration.</p>
<p>With respect to the selection of specific papers and associated knowledge artifacts, our intent at this point in the knowledge discovery process is to prioritize the packages outlined above based on their relative maturity, and proceed to conduct additional, augmenting bibliometric exploration in the months ahead. Our view is that EMOD, EpiModel, NetLogo, and FRED can be considered established packages, given their relative maturity and the relative availability of published papers citing these packages. Pathogen and NDLib can be considered newer packages, in that they are relatively new and lack citations, but have several positive features that bode well for an uptick in use and associated citation in the near- to medium-term. It is worth noting that while the established packages provide a larger corpus of work from which to select a set of knowledge artifacts, the newer packages are more modern, and as such, we expect them to be easier to integrate into the type of data science/meta-modeling pipelines we will develop. Additionally, we note that should the absence of published works prove to be an obstacle for a package we ultimately decide to support via integration into our framework, we are able to generate feasible examples by writing them ourselves.</p>
<p>For purposes of development and testing, we will need to use simple or contrived models that are presented in a homogeneous framework. Pedagogical textbooks <span class="citation" data-cites="voit_first_2012"></span> and lecture notes<a href="#fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a> will be a resource for these simple models that are well characterized.</p>
<h1 id="information-extraction">Information Extraction</h1>
<p>In order to construct the knowledge graph that we will traverse to generate metamodel directed acyclic graphs (DAGs), we will begin by defining a set of knowledge artifacts and implementing (in both code and process/system design) an iterative, expert-in-the-loop knowledge extraction pipeline. The term “knowledge artifacts” is intended to refer to the set of open-source software packages (e.g., their code-bases), as well as a curated subset of published papers in the scientific domains of epidemiology and/or information diffusion that cite one or more of these packages and make their own code and/or data (if relevant) freely available. Our approach to the selection of packages and papers has been outlined in the preceding section, and is understood to be both iterative and flexible to the incorporation of additional criteria/constraints, and/or the inclusion/exclusion of (additional) works as the knowledge discovery process proceeds.</p>
<p>Given a set of knowledge artifacts, we plan to proceed with information extraction as follows: First, we will leverage an expert-systems-based approach to derive rules that automatically recognize and extract relevant phenomena; see Table <a href="#table:info_extract" data-reference-type="ref" data-reference="table:info_extract">[table:info_extract]</a> for details. The rules will be built using the outputs of language parsers and applied to specific areas of source code that meet other heuristic criteria, e.g., length or association with other functions/methods. Next, we will experiment with supervised approaches (mentioned in our proposal) and utilize information from static code analysis tools, programming language parsers, and lexical and orthographic features of the source code and documentation. For example, variables that are calculated as a result of running a for loop within code, and whose characters, lexically speaking, occur within source code documentation and/or associated research publications, are likely related to specific models being proposed or extended in publications.</p>
<p>We will also perform natural language parsing <span class="citation" data-cites="manning"></span> on the research papers themselves to provide cues for how we perform information extraction on associated scripts with references to known libraries. For example, a research paper will reference a library that our system is able to reason about and extend models from; if no known library is identified, then the system will not attempt further pipeline steps. A paper that leverages the EpiModel library, for instance, will contain references to the EpiModel library itself and, in some cases, to a particular family of models, e.g., “Stochastic Individual Contact Models”. The paper will likely not mention the actual library functions/methods that were used, but it will reference particular circumstances of using a particular model, e.g., the model parameters that were the focus of the paper’s investigation. These kinds of information will be used in supervised learning to build the case for different kinds of extractions. In order to do supervised learning, we will develop ground truth annotations to train models with. To gain a better sense of the kinds of knowledge artifacts we will be working with, below we present an example paper from which a metamodel can be built and from which information can be extracted to help create that metamodel.</p>
<h2 id="epimodels-example">EpiModels Example</h2>
<p>In <span class="citation" data-cites="doi:10.1111/oik.04527"></span> the authors utilize the EpiModels library and provide scripts for running the experiments they describe. We believe this is an example of the kind of material we will be able to perform useful information extractions on to inform the development of metamodels. Figure <a href="#fig:covar_paper1" data-reference-type="ref" data-reference="fig:covar_paper1">[fig:covar_paper1]</a> is an example of script code from <span class="citation" data-cites="doi:10.1111/oik.04527"></span>:</p>
<figure>
<img src="covar_fig1.jpg" alt="Example script excerpt associated with setting parameters for use in an ERGM model implemented by EpiModels library." style="width:70.0%" /><figcaption>Example script excerpt associated with <span class="citation" data-cites="doi:10.1111/oik.04527"></span> setting parameters for use in an ERGM model implemented by EpiModels library.</figcaption>
</figure>
<p><span id="fig:covar_paper1" label="fig:covar_paper1">[fig:covar_paper1]</span></p>
<p>Table <a href="#table:info_extract" data-reference-type="ref" data-reference="table:info_extract">[table:info_extract]</a> is a non-exhaustive list of the kinds of information extractions we are currently planning and the purposes they serve in supporting later steps:</p>
<table>
<caption>Planned information extractions. A non-exhaustive list of information extractions, their purposes, and sources.</caption>
<thead>
<tr class="header">
<th style="text-align: left;">Extraction Type</th>
<th style="text-align: left;">Description</th>
<th style="text-align: left;">Sources</th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td style="text-align: left;">Code References</td>
<td style="text-align: left;">Creation and selection of metamodels to extend or utilize depending on user goals</td>
<td style="text-align: left;">Papers, Scripts</td>
</tr>
<tr class="even">
<td style="text-align: left;">Model Parameters</td>
<td style="text-align: left;">Natural language variable names, function parameters</td>
<td style="text-align: left;">Papers, Scripts</td>
</tr>
<tr class="odd">
<td style="text-align: left;">Function Names</td>
<td style="text-align: left;">Names of library model functions used to run experiments in papers</td>
<td style="text-align: left;">Scripts</td>
</tr>
<tr class="even">
<td style="text-align: left;">Library Names</td>
<td style="text-align: left;">Include statements to use specific libraries. Identification of libraries</td>
<td style="text-align: left;">Scripts</td>
</tr>
</tbody>
</table>
<p><span id="table:info_extract" label="table:info_extract">[table:info_extract]</span></p>
<p>The information extractions we produce here will be included as annotations in the knowledge representations we describe next.</p>
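<p>As a concrete illustration of the “Library Names” row in Table <a href="#table:info_extract" data-reference-type="ref" data-reference="table:info_extract">[table:info_extract]</a>, the following sketch applies simple regular-expression heuristics (rather than a full parser) to find R <code>library()</code> calls and Julia <code>using</code>/<code>import</code> statements in a script; the helper names are illustrative:</p>
<pre><code>const LIB_PATTERNS = [
    r"^\s*library\((\w+)\)"m,              # R
    r"^\s*(?:using|import)\s+([\w.]+)"m,   # Julia
]

function extract_libraries(src::AbstractString)
    libs = String[]
    for pat in LIB_PATTERNS, m in eachmatch(pat, src)
        push!(libs, m.captures[1])
    end
    return unique(libs)
end

extract_libraries("library(EpiModel)\nusing DifferentialEquations")
# returns ["EpiModel", "DifferentialEquations"]
</code></pre>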
<h1 id="knowledge-representation">Knowledge Representation</h1>
<p>On the topic of dimensionality/complexity reduction (in an entropic sense) and knowledge representation, we will begin by passing the code associated with each knowledge artifact through static analysis tools. Static analysis tools include linters intended to help programmers debug their code and correct syntax, stylistic, and/or security-related errors. As the knowledge artifacts in our set are derived from already published works, we do not anticipate syntax errors. Rather, our objective is to use the abstract syntax trees (ASTs), call graphs, control flow graphs, and/or dependency graphs that are produced during static analysis to extract both discrete model instantiation(s) (along with arguments, which can be mapped back to parameters which may have associated metadata, including required object type and units), as well as sequential function call information.</p>
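<p>As a small illustration of what the AST route gives us, Julia exposes its own parser, so a model-instantiation call appearing in source text can be parsed and inspected without being executed; the specific call below is only an illustrative example:</p>
<pre><code>expr = Meta.parse("prob = ODEProblem(sir!, u0, (0.0, 200.0), p)")

expr.head            # :(=)               an assignment node
call = expr.args[2]  # the right-hand side, a :call expression
call.args[1]         # :ODEProblem        a candidate model-class vertex
call.args[2:end]     # the arguments      candidate parameter vertices
</code></pre>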
<p>The former can be thought of as contributing a connected subgraph to the knowledge graph, such that <span class="math inline"><em>G</em><sub><em>i</em></sub> ⊂ <em>G</em></span>, in which model classes and variable data/unit types are represented as vertices and connected by directed “requires/accepts” edges. The latter can be thought of as contributing information about the mathematical and/or domain-specific legality and/or frequency with which a given subset of model types can be sequentially linked; this information can be used to weight edges connecting model nodes in the knowledge graph.</p>
<p>The knowledge graph approach will help identify relevant pieces of context. For example, the domain of a scientific paper or code will be necessary for correct resolution of scientific terms that are used to refer to multiple phenomena in different contexts. In a paper about biological cell signalling pathways, the term “death” is likely to refer to the death of individual cells, while in a paper about disease prevalence in at-risk populations, the same term is likely referring to the death of individual people. This will be further complicated by figurative language in the expository parts of a paper, where “death” might be used as a metaphor when a cultural behavior or meme “dies out” because people stop spreading the behavior to their social contacts.</p>
<h2 id="schema-design">Schema Design</h2>
<p>We will represent the information extracted from the artifacts using a knowledge graph. And while knowledge graphs are very flexible in how they represent data, it helps to have a schema describing the vertex and edge types along with the metadata that will be stored on the vertices and edges.</p>
<p>In our initial approach, the number of libraries that models can be implemented with will be small enough that schema design can be done by hand. We expect that this schema will evolve as features are added to the system, but remain mostly stable as new libraries, models, papers, and artifacts are added.</p>
<p>When a new paper/code comes in, we will extract edges and vertices automatically with code which represents those edges and vertices in the predefined schema.</p>
<p>Many of the connections will be from artifacts to their components, which will connect to concepts. When papers are connected to other papers, they are connected indirectly (e.g., via other vertices), except for edges that represent citations directly between papers.</p>
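<p>A minimal sketch of what such a hand-written schema could look like in code is shown below; the vertex and edge types are illustrative placeholders rather than the final schema:</p>
<pre><code>@enum VertexType Paper Code Model Variable Concept Dataset
@enum EdgeType Cites Implements Instantiates Mentions UsesDataset

struct Vertex
    id::Int
    type::VertexType
    metadata::Dict{Symbol,Any}   # e.g. :title, :doi, :language, :units
end

struct Edge
    src::Int
    dst::Int
    type::EdgeType
end

kg = (vertices = Vertex[], edges = Edge[])
push!(kg.vertices, Vertex(1, Paper, Dict(:doi => "10.1111/oik.04527")))
push!(kg.vertices, Vertex(2, Model, Dict(:name => "ERGM")))
push!(kg.edges, Edge(1, 2, Mentions))
</code></pre>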
<figure>
<embed src="schema.pdf" /><figcaption>An example of the knowledge graph illustrating the nature of the schema.<span label="fig:schema."></span></figcaption>
</figure>
<p>It is an open question for this research whether the knowledge graph should contain models with the parameters bound to values, or the general concept of a model with parameters available for instantiation. Our initial approach will be to model both the general concept of a model such as <code>HookesLawModel</code> along with the specific instantiation <code>HookesLawModel{k=5.3}</code> from specific artifacts.</p>
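<p>A minimal sketch of both representations follows; the <code>HookesLawModel{k=5.3}</code> notation above is shorthand, and in Julia the bound instance would be constructed as shown:</p>
<pre><code>struct HookesLawModel{T}
    k::T                         # spring constant
end

force(m::HookesLawModel, x) = -m.k * x

general  = HookesLawModel        # the general concept, parameter unbound
specific = HookesLawModel(5.3)   # the instantiation found in a specific artifact
force(specific, 0.1)             # -0.53
</code></pre>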
<h2 id="data-sets-in-the-knowledge-graph">Data Sets in the Knowledge Graph</h2>
<p>One important way that papers refer to the same physical phenomenon is by using the same data sets. These common datasets become benchmarks that are leveraged widely in the research community, and they are concentrated in a small number of widely cited papers. This is good for our task because we know that if two papers use the same dataset then they are talking about the same phenomenon.</p>
<p>The most direct way to identify overlapping datasets is to go through the papers that provide the canonical source for each dataset. But we can also think of similarity of datasets in terms of the schema(s) of the datasets. This requires a more detailed dataset representation than just the column names commonly found on CSV files. Google’s open dataset search has done a lot of the work necessary for annotating the semantics of dataset features. The DataDeps.jl system includes programmatic ways to access this information for many of the common open science data access protocols<a href="#fn2" class="footnote-ref" id="fnref2"><sup>2</sup></a>. By linking dataset feature (column) names to knowledge graph concepts, we will be able to compare datasets for similarity and conceptual overlap. The fact that two models are connected to the same dataset or concept is an important indicator that the two models are compatible or interchangeable.</p>
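<p>A minimal sketch of this linking, with illustrative column and concept names, is:</p>
<pre><code># map dataset feature (column) names to knowledge graph concepts
columns_a = Dict("num_infected" => :InfectedCount, "report_date" => :ObservationTime)
columns_b = Dict("I"            => :InfectedCount, "t"           => :ObservationTime)

# two datasets overlap conceptually if their columns map to shared concepts
conceptual_overlap(a, b) = intersect(Set(values(a)), Set(values(b)))

conceptual_overlap(columns_a, columns_b)   # Set([:InfectedCount, :ObservationTime])
</code></pre>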
<h2 id="schema.org">Schema.org</h2>
<p>Schema.org is one of the largest and most diverse knowledge graph systems, yet it includes virtually no coverage of scientific concepts: there are no schema.org nodes for Variable, Function, or Equation. The most relevant schema.org concepts are given in the following list.</p>
<ul>
<li><p><a href="https://schema.org/ScholarlyArticle" class="uri">https://schema.org/ScholarlyArticle</a></p></li>
<li><p><a href="https://schema.org/SoftwareSourceCode" class="uri">https://schema.org/SoftwareSourceCode</a>.</p></li>
<li><p><a href="https://schema.org/ComputerLanguage" class="uri">https://schema.org/ComputerLanguage</a></p></li>
<li><p><a href="https://schema.org/variableMeasured" class="uri">https://schema.org/variableMeasured</a></p></li>
<li><p><a href="https://meta.schema.org/Property" class="uri">https://meta.schema.org/Property</a></p></li>
<li><p><a href="https://schema.org/DataFeedItem" class="uri">https://schema.org/DataFeedItem</a></p></li>
<li><p><a href="https://schema.org/Quantity" class="uri">https://schema.org/Quantity</a> which has more specific types</p>
<ul>
<li><p><a href="https://schema.org/Distance" class="uri">https://schema.org/Distance</a></p></li>
<li><p><a href="https://schema.org/Duration" class="uri">https://schema.org/Duration</a></p></li>
<li><p><a href="https://schema.org/Energy" class="uri">https://schema.org/Energy</a></p></li>
<li><p><a href="https://schema.org/Mass" class="uri">https://schema.org/Mass</a></p></li>
</ul></li>
</ul>
<p>The focus of schema.org is driven by its adoption in the web document community. Schema.org concepts are used for tagging documents in order for search engines or automated information extraction systems to find structured information in documents. Often it is catalogue or indexing sites that use schema.org concepts to describe the items or documents in their collections.</p>
<p>The lack of coverage for scientific concepts is surprising, given that academic research on publication mining tends to focus on the researchers’ own fields; for example, papers about mining bibliographic databases often use the database research community itself as their example.</p>
<p>One could model the relationships between papers using schema.org, but that takes place at the bibliometric level instead of the model-semantics level; there are no entries for expressing that two papers solve the same equation or model the same physical phenomena. Of course schema.org is organized so that everything can be expressed as a <a href="https://schema.org/Thing" class="uri">https://schema.org/Thing</a>, but there is no explicit representation for these concepts. There is a schema.org schema for health and life sciences, <a href="https://health-lifesci.schema.org/" class="uri">https://health-lifesci.schema.org/</a>. As we define the schema of our knowledge graph, we will link up with the schema.org concepts as much as possible and could add an extension to schema.org in order to represent scientific concepts.</p>
<h1 id="model-representation-and-execution">Model Representation and Execution</h1>
<p>Representation of models occurs at four levels:</p>
<ul>
<li><p><strong>Executable</strong>: the level of machine or byte-code instructions</p></li>
<li><p><strong>Lexical</strong>: the traditional code representation of assignments, functions, and loops</p></li>
<li><p><strong>Semantic</strong>: a declarative language or computation graph representation with nodes linked to the knowledge graph</p></li>
<li><p><strong>Human</strong>: a description in natural language as in a research paper or textbook</p></li>
</ul>
<p>The current method of scientific knowledge extraction is to take a Human level description and have a graduate student build a Lexical level description by reading papers and implementing new codes. We aim to introduce the Semantic level which is normally stored only in the brains of human scientists, but must be explicitly represented in machines in order to automate scientific knowledge extraction. A scientific model represented at the Semantic level will be easy to modify computationally and be describable for the automatic description generation component. The Semantic level representation of a model is a computation DAG. One possible description is to represent the DAG in a human-friendly way, such as in Figure <a href="#fig:flu" data-reference-type="ref" data-reference="fig:flu">[fig:flu]</a>.</p>
<figure>
<embed src="flu_pipeline.pdf" /><figcaption>An example pipeline and knowledge graph elements for a flu response model.<span label="fig:flu"></span></figcaption>
</figure>
<h2 id="scientific-workflows-pipelines">Scientific Workflows (Pipelines)</h2>
<p>Our approach will need to be differentiated from scientific workflow managers that are based on conditional evaluation tools like Make. Some examples include <a href="https://swcarpentry.github.io/make-novice/">Make for scientists</a>, <a href="http://scipipe.org/">Scipipe</a>, and <a href="https://galaxyproject.org/">the Galaxy project</a>. These scientific workflows focus on representing the relationships between intermediate data products without getting into the model semantics. While scientific workflow managers are a useful tool for organizing the work of a scientist, they do not have a particularly detailed representation of the modeling tasks. Workflow tools generally accept the UNIX wisdom that text is the universal interface and communicate between programs using files on disk or in-memory pipes, sockets, or channels that contain lines of text.</p>
<p>Our approach will track a higher fidelity representation of the model semantics in order to enable computational reasoning over the viability of combined models. Ideas from static analysis of computer programs will enable better verification of metamodels before we run them.</p>
<h2 id="metamodels-as-computation-graphs">Metamodels as Computation Graphs</h2>
<p>Our position is that if you have a task currently solved with a general purpose programming language, you cannot replace that solution with anything less powerful than a general purpose programming language. The set of scientific modeling codes is just too diverse, with each part a custom solution, to be solved with a limited scope solution like a declarative model specification. Thus we embed our solution into the general purpose programming language Julia.</p>
<p>We use high level programming techniques such as abstract types and multiple dispatch in order to create a hierarchical structure to represent a model composed of sub-models. These hierarchies can lead to static or dynamic DAGs of models. Every system that relies on building an execution graph and then executing it finds the need for dynamically generated DAGs at some point. For sufficiently complicated systems, the designer does not know the set of nodes and dependencies until execution has started. Examples include recursive usage of the make build tool, which led to tools such as <code>cmake</code>, <code>Luigi</code>, and <code>Airflow</code>, and deep learning frameworks, which have both static and dynamic computation graph implementations, for example TensorFlow and PyTorch. There is a tradeoff between the static analysis that helps optimize and validate static representations and the ease of use of dynamic representations. We will explore this tradeoff as we implement the system.</p>
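<p>The following sketch, with hypothetical model types, shows the flavor of this approach: an abstract supertype for models, concrete sub-models, and a metamodel that composes them, with multiple dispatch selecting the right behavior for each component:</p>
<pre><code>abstract type AbstractModel end

struct ScaleModel &lt;: AbstractModel
    k::Float64
end

struct OffsetModel &lt;: AbstractModel
    b::Float64
end

struct MetaModel &lt;: AbstractModel
    stages::Vector{AbstractModel}   # a simple chain here; a DAG in general
end

# multiple dispatch picks the right method for each sub-model
evaluate(m::ScaleModel,  x) = m.k * x
evaluate(m::OffsetModel, x) = x + m.b
evaluate(m::MetaModel,   x) = foldl((u, s) -> evaluate(s, u), m.stages; init = x)

evaluate(MetaModel([ScaleModel(2.0), OffsetModel(1.0)]), 3.0)   # 7.0
</code></pre>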
<p>For a thorough example of how to use our library to build a metamodel, see the notebook <code>FluExample.ipynb</code>. This example uses the Julia type system to build a model DAG that represents all of the component models in a machine readable form. This DAG is represented in Figure <a href="#fig:flu" data-reference-type="ref" data-reference="fig:flu">[fig:flu]</a>. Code snippets and rendered plots appear in the notebook.</p>
<h2 id="metamodel-constraints">Metamodel Constraints</h2>
<p>When assembling a metamodel, it is important to eliminate possible combinations of models that are scientifically or logically invalid. One type of constraint is provided by units and dimensional analysis. Our flu example pipeline uses <a href="https://github.com/ajkeller34/Unitful.jl">Unitful.jl</a> to represent the quantities in the models, including <span class="math inline"><em>C</em>, <em>s</em>, <em>d</em>, <em>person</em></span> for Celsius, second, day, and person. While <span class="math inline"><em>C</em>, <em>s</em>, <em>d</em></span> are SI defined units that come with Unitful.jl, person is a user defined type that was created for this model. These unit constraints enable a dynamic analysis tool (the Julia runtime system) to invalidate combinations of models that fail to use unitful numbers correctly, i.e., in accordance with the rules of dimensional analysis taught in high school chemistry and physics. In order to make rigorous combinations of models, more information will need to be captured about the component models. It is necessary but not sufficient for a metamodel to be dimensionally consistent. We will investigate the additional constraints necessary to check metamodels for correctness.</p>
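<p>A minimal sketch of this kind of run-time unit checking with Unitful.jl follows; the quantities are illustrative, and the user-defined <code>person</code> unit is omitted for brevity:</p>
<pre><code>using Unitful

duration = 200.0u"d"       # days
rate     = 40.0u"d^-1"     # events per day

rate * duration            # ok: a dimensionless count

# combining incompatible dimensions is rejected by the Julia runtime
try
    duration + rate
catch err
    println(err)           # Unitful.DimensionError
end
</code></pre>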
<h2 id="metamodel-transformations">Metamodel Transformations</h2>
<p>Metamodel transformations describe high-level operations the system will perform based on the user’s request and the information available to it in conjunction with using a particular set of open source libraries; examples of these are as follows:</p>
<ol>
<li><p>utilizing an existing metamodel and modifying its parameters;</p></li>
<li><p>modifying the functional form of a model, such as adding terms to an equation (see the sketch following this list);</p></li>
<li><p>changing the structure of the metamodel by modifying the structure of the computation graph;</p></li>
<li><p>introducing new nodes to the model<a href="#fn3" class="footnote-ref" id="fnref3"><sup>3</sup></a>.</p></li>
</ol>
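<p>A minimal sketch of transformation (2), adding a term to an equation stored as a Julia expression, is shown below; the extra term is purely illustrative:</p>
<pre><code>rhs = :(-β * S * I / N)           # right-hand side of dS/dt in a basic SIR model

add_term(ex, term) = :($ex + $term)

rhs2 = add_term(rhs, :(ρ * R))    # e.g. let recovered individuals become susceptible again
</code></pre>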
<h2 id="types">Types</h2>
<p>This project leverages the Julia type system and code generation toolchain extensively.</p>
<p>Many Julia libraries define an abstract interface for representing the problems they can solve, for example:</p>
<ul>
<li><p>DifferentialEquations.jl <a href="https://github.com/JuliaDiffEq/DiffEqBase.jl" class="uri">https://github.com/JuliaDiffEq/DiffEqBase.jl</a> defines <code>DiscreteProblem</code>, <code>ODEProblem</code>, <code>SDEProblem</code>, <code>DAEProblem</code> which represent different kinds of differential equation models that can be used to represent physical phenomena. Higher level concepts such as a <code>MonteCarloProblem</code> can be composed of subproblems in order to represent more complex computations. For example a <code>MonteCarloProblem</code> can be used to represent situations where the parameters or initial conditions of an <code>ODEProblem</code> are random variables, and a scientist aims to interrogate the distribution of the solution to the ODE over that distribution of inputs (see the sketch following this list).</p></li>
<li><p>MathematicalSystems.jl <a href="https://juliareach.github.io/MathematicalSystems.jl/latest/lib/types.html" class="uri">https://juliareach.github.io/MathematicalSystems.jl/latest/lib/types.html</a> defines an interface for dynamical systems and controls such as <code>LinearControlContinuousSystem</code> and <code>ConstrainedPolynomialContinuousSystem</code> which can be used to represent Dynamical Systems including hybrid systems which combine discrete and continuous phenomena. Hybrid systems are of particular interest to scientists examining complex phenomena at the interface of human designed systems and natural phenomena.</p></li>
<li><p>Another library for dynamical systems is DynamicalSystems.jl <a href="https://juliadynamics.github.io/DynamicalSystems.jl/" class="uri">https://juliadynamics.github.io/DynamicalSystems.jl/</a>, which takes a timeseries and physics approach to dynamical systems, as compared to the engineering and controls approach taken in MathematicalSystems.jl.</p></li>
<li><p>MADs <a href="http://madsjulia.github.io/Mads.jl/" class="uri">http://madsjulia.github.io/Mads.jl/</a> offers a modeling framework that supports many of the model analysis and decision support tasks that will need to be performed on metamodels that we create.</p></li>
</ul>
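<p>As a small example of working against the DifferentialEquations.jl interface (our own sketch, not code extracted from an artifact), an SIR model can be posed as an <code>ODEProblem</code> and wrapped in a <code>MonteCarloProblem</code> that randomizes its initial conditions via <code>remake</code>:</p>
<pre><code>using DifferentialEquations

# SIR right-hand side in the DiffEq convention: du is filled in place,
# u = (S, I, R) and p = (β, γ)
function sir!(du, u, p, t)
    β, γ = p
    S, I, R = u
    N = S + I + R
    du[1] = -β * S * I / N
    du[2] =  β * S * I / N - γ * I
    du[3] =  γ * I
end

prob = ODEProblem(sir!, [990.0, 10.0, 0.0], (0.0, 200.0), (0.4, 0.2))
sol  = solve(prob)

# randomize the initial number of infected individuals across trajectories
mc = MonteCarloProblem(prob,
    prob_func = (prob, i, repeat) -> remake(prob, u0 = [990.0, 10.0 * rand(), 0.0]))
</code></pre>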
<p>Each of these libraries will need to be integrated into the system by understanding the types that are used to represent problems and developing constraints for how to create hierarchies of problems that fit together. We think that the number of libraries that the system understands will be small enough that the developers can do a small amount of work per library to integrate it into the system, but that the number of papers will be too large for manual tasks per paper.</p>
<p>When a new paper or code snippet is ingested by the system, we may need to generate new leaf types for that paper automatically.</p>
<h2 id="user-interface">User Interface</h2>
<p>Our system is intended for expert scientists who want to reduce the time they spend writing code and plumbing models together. As input it takes a set of things known or measured by the scientist and a set of variables or quantities of interest that are unknown. The output is a program that calculates the unknowns as a function of the known input(s) provided by the user, potentially with holes that require expert knowledge to fill in.</p>
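<p>A hypothetical sketch of the user-facing interaction we envision follows; none of these names correspond to an implemented API yet:</p>
<pre><code>knowns   = [:temperature, :population, :vaccination_budget]
unknowns = [:flu_cases_averted]

# hypothetical calls: traverse the knowledge graph to assemble a program
# pipeline = generate_metamodel(kg, knowns, unknowns)
# result   = execute(pipeline, data)   # may contain holes for expert input
</code></pre>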
<h2 id="generating-new-models">Generating new models</h2>
<p>We will use metaprogramming to build a library that takes as input data structures representing models (derived in part from information previously extracted from research publications and associated scripts), transforms and combines them into new models, and then generates executable code based on these new, potentially larger models.</p>
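<p>A minimal sketch of the code-generation step, with illustrative names: a record of a model’s right-hand-side terms is turned into an executable function using Julia’s metaprogramming facilities:</p>
<pre><code>struct OdeSpec
    terms::Vector{Expr}    # one expression per component of du
end

function codegen(spec::OdeSpec)
    body = [:(du[$i] = $t) for (i, t) in enumerate(spec.terms)]
    :(function (du, u, p, t)
          $(body...)
          return nothing
      end)
end

spec = OdeSpec([:(-p[1] * u[1] * u[2]), :(p[1] * u[1] * u[2] - p[2] * u[2])])
rhs! = eval(codegen(spec))    # an executable right-hand side built from data
</code></pre>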
<p>One foreseeable technical risk is that the Julia compiler and type inference mechanism could be overwhelmed by the number of methods and types that our system defines. In a fully static language like C++ the number of types defined in a program is fixed at compile time and the compile burden is paid once for many executions of the program. In a fully dynamic language like Python, there is no compilation time and the cost of type checking is paid at run time. However, in Julia, there is both compile time analysis and run time type computations.</p>
<p>In Julia, changing the argument types to a function causes a round of LLVM compilation for the new method of that function. When using Unitful numbers in calculations, changes to the units of the numbers create new types and thus additional compile time overhead. This overhead is necessary to provide unitful numbers that are no slower for calculations than bare bitstypes provided by the processor. As we push more information into the type system, this tradeoff of additional compiler overhead will need to be managed.</p>
<h1 id="validation">Validation</h1>
<p>There are many steps to this process and at each step there is a different process for validating the system.</p>
<ul>
<li><p><em>Extraction of knowledge elements from artifacts</em>: we will need to assess the accuracy of knowledge elements extracted from text, code and documentation to ensure that the knowledge graph is correct. This will require some manual annotation of data from artifacts and quality measures such as precision and recall. The precision is the fraction of edges in the knowledge graph that are correct, and the recall is the fraction of correct edges that were recovered by the information extraction approach (a small sketch follows this list).</p></li>
<li><p><em>Metamodel construction</em>: once we have a knowledge graph, we will need to ensure that combinations of metamodels are valid and optimal. We will aim to produce the simplest metamodel that relates the queried concepts; this will be measured in terms of the number of metamodel nodes, the number of metamodel dependency connections, and the number of adjustment or transformation functions. We will design test cases that increase in complexity from pipelines with no need to transform variables, to pipelines with variable transformations, to directed acyclic graphs (DAGs).</p></li>
<li><p><em>Model Accuracy</em>: as the metamodels are combinations of models that are imperfect, there will be compounding error within the metamodel. We will need to validate that our metamodel execution engine does not add error unnecessarily. This will involve numerical accuracy related to finite precision arithmetic, as well as statistical accuracy related to the ability to learn parameters from data. Additionally, since we are by necessity doing some amount of domain adaptation when reusing models, we will need to quantify the domain adaptation error generated by applying a model developed for one context in a different context. These components of errors can be thought of as compounding loss in a signal processing system where each component of the design introduces loss with a different response to the input.</p></li>
</ul>
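<p>As a small illustration of the first bullet, precision and recall over extracted knowledge graph edges (represented here as source/destination pairs compared against a manually annotated gold set):</p>
<pre><code>extracted = Set([(1, 2), (1, 3), (2, 4)])   # edges produced by the extractor
gold      = Set([(1, 2), (2, 4), (3, 4)])   # manually annotated edges

correct   = intersect(extracted, gold)
precision = length(correct) / length(extracted)   # 2/3
recall    = length(correct) / length(gold)        # 2/3
</code></pre>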
<p>Our view is to analogize the metamodel construction error and the model accuracy to the error and residual in numerical solvers. For a given root finding problem, such as solving <span class="math inline"><em>f</em>(<em>x</em>) = 0</span> for <span class="math inline"><em>x</em></span>, the most common way to measure the quality of the solution is to measure both the error and the residual. The error is defined as <span class="math inline">∣<em>x</em> − <em>x</em><sup>*</sup>∣</span>, the difference from the correct solution in the domain, and the residual is <span class="math inline">∣<em>f</em>(<em>x</em>) − <em>f</em>(<em>x</em><sup>*</sup>)∣</span>, the difference from the correct solution in the codomain. We will frame our validation in terms of error and residual, where the error is how close we got to the best metamodel and the residual is the difference between the observed and predicted phenomena.</p>
<p>These techniques need to generate simple, explainable models for physical phenomena that are easy for scientists to generate, probe, and understand, while being the best possible model of the phenomena under investigation.</p>
<h1 id="next-steps">Next Steps</h1>
<p>Our intended path forward following the review of this report is as follows:</p>
<ol>
<li><p>Incorporation of feedback received from DARPA PM, including information related to: the types of papers we consider to be in scope (e.g., those with and without source code); domain coverage and desired extensibility; expressed preference for inclusion/exclusion of particular package(s) and/or knowledge artifact(s).</p></li>
<li><p>Construction of a proof-of-concept version of our knowledge graph and end-to-end pipeline, in which we begin with a motivating example and supporting documents (e.g., natural language descriptions of the research questions and mathematical relationships modeled; source code), show how these documents can be used to construct a knowledge graph, and show how traversal of this knowledge graph can approximately reproduce a hand-written Julia meta-modeling pipeline. The flu example outlined above is our intended motivating example, although we are open to tailoring this motivating example to domain(s) and/or research questions that are of interest to DARPA.</p></li>
<li><p>A feature of the system not yet explored is automatic transformation of models at the Semantic Level. These transformations will be developed in accordance with interface expectations from downstream consumers including the TA2 performers.</p></li>
</ol>
<p>Executing on this proof-of-concept deliverable will allow us to experience the iterative development and research life-cycle that end-users of our system will ultimately participate in. We anticipate that this process will help us to identify gaps in our knowledge and framing of the problem at hand, and/or shortcomings in our methodological approach that we can enhance through the inclusion of curated domain-expert knowledge (e.g., to supplement the lexical nodes and edges we are able to extract from source code). In addition, we expect the differences between our hand-produced meta-model and our system-produced meta-model to be informative and interpretable as feedback which can help us to improve the system architecture and associated user experience. It’s also worth noting that over the medium term, we anticipate that holes in the knowledge graph (e.g., missing vertices and/or edges; missing conversion steps to go from one unit of analysis to another, etc.) may help us to highlight areas where either additional research, and/or expert human input is needed.</p>
<section class="footnotes">
<hr />
<ol>
<li id="fn1"><p><a href="http://alun.math.ncsu.edu/wp-content/uploads/sites/2/2017/01/epidemic_notes.pdf" class="uri">http://alun.math.ncsu.edu/wp-content/uploads/sites/2/2017/01/epidemic_notes.pdf</a><a href="#fnref1" class="footnote-back"></a></p></li>
<li id="fn2"><p><a href="http://white.ucc.asn.au/DataDeps.jl/latest/z20-for-pkg-devs.html#Registering-a-DataDep-1" class="uri">http://white.ucc.asn.au/DataDeps.jl/latest/z20-for-pkg-devs.html#Registering-a-DataDep-1</a><a href="#fnref2" class="footnote-back"></a></p></li>
<li id="fn3"><p>new model nodes must first be ingested into the system in order to be made available to users.<a href="#fnref3" class="footnote-back"></a></p></li>
</ol>
</section>
</body>
</html>

36
doc/make.jl

@@ -0,0 +1,36 @@
using Documenter
@info "Loading module"
using Semantics
@info "Makeing docs"
makedocs(
modules = [Semantics],
format = :html,
sitename = "Semantics",
doctest = false,
pages = Any[
"Getting Started" => "index.md",
"Library Reference" => "library.md",
"Slides" => "slides.md",
# "Model Types" => "types.md",
# # "Reading / Writing Models" => "persistence.md",
# # "Plotting" => "plotting.md",
# # "Parallel Algorithms" => "parallel.md",
# "Contributing" => "contributing.md",
# "Developer Notes" => "developing.md",
# "License Information" => "license.md",
# "Citing SemanticModels" => "citing.md"
]
)
# # deploydocs(
# # deps = nothing,
# # make = nothing,
# # repo = "github.com/jpfairbanks/SemanticModels.jl.git",
# # target = "build",
# # julia = "stable",
# # osname = "linux"
# # )
# # rm(normpath(@__FILE__, "../src/contributing.md"))
# # rm(normpath(@__FILE__, "../src/license.md"))

26
doc/server.go

@@ -0,0 +1,26 @@
/*
Serve is a very simple static file server in go
Usage:
-p="8100": port to serve on
-d=".": the directory of static files to host
Navigating to http://localhost:8100 will display the index.html or directory
listing file.
*/
package main
import (
"flag"
"log"
"net/http"
)
func main() {
port := flag.String("p", "8100", "port to serve on")
directory := flag.String("d", ".", "the directory of static files to host")
flag.Parse()
http.Handle("/", http.FileServer(http.Dir(*directory)))
log.Printf("Serving %s on HTTP port: %s\n", *directory, *port)
log.Fatal(http.ListenAndServe(":"+*port, nil))
}

51
doc/src/img/aske.dot

@@ -0,0 +1,51 @@
digraph H{
rankdir="LR"
#subgraph cluster_G {
node[style=filled]
label ="Pipeline of extraction"
rankdir="LR"
subgraph cluster_1{
node[color="#5DADE2"]
label="Extraction"
a,b,c,d
a->b [label="text"]
a->c [label="data"]
a->d [label="formulas"]
#e [label="Find possible\nmodels"]
}
subgraph cluster_2 {
node[color=lightblue]
h [label="knowledge\ngraph"]
label="Model Creation"
constraint=false
h -> f
e [label="model\nconstraints"]
f [label="likely\nmodel"]
{rank=same h e}
e -> f
}
subgraph cluster_3{
node[color="#48C9B0"]
f -> g
g -> i
label="Solving"
}
{b,c,d} -> e
h -> i
h -> e
a -> h
a [label="documents"]
b [label="rules"]
c [label="parameters"]
d [label="functions"]
g [label="code\ngeneration"]
i [label="solver"]
# }
}

0
doc/covar_fig1.jpg → doc/src/img/covar_fig1.jpg


50
doc/src/img/election_metamodel.dot

@@ -0,0 +1,50 @@
digraph G {
rankdir="LR"
subgraph cluster_meta{
label="Election metamodel";
subgraph cluster_0 {
# style=filled;
color=lightgrey;
node [style=filled,shape=Square];
a0 -> a2
a1 -> a2
label = "Belief\nModel";
color=black
{rank=same a0 a1}
}
subgraph cluster_1 {
node [style=filled, shape=Square];
Finance -> ges
Income-> ges
#Wealth -> ges
{rank=same Finance Income}
label = "Economic\nModel";
color=darkgreen
}
subgraph cluster_2 {
node [style=filled,color=lightblue,shape=Square];
a2->Polls [style=dashed];
Polls -> elec;
ges -> Fundamentals [style=dashed];
Fundamentals->elec;
label = "Election Model";
#style=filled;
color=black;
}
Finance [label="Stocks"];
Income [label="Wages"]
ges [label="Economic\nmood"];
elec -> end [style=dashed];
elec [label="Election\n winner"];
a0 [shape=Square; label="Campaigns"];
a1 [shape=Square; label="Initial\nBeliefs"];
a2 [label="Beliefs" ];
#start [shape=Mdiamond];
end [shape=Msquare, label="Policy\noutcome"];
}
label="A simple model of election forcasting";
}

54
doc/src/img/extraction.dot

@@ -0,0 +1,54 @@
digraph H{
rankdir="LR"
node[shape=rectangle]
#subgraph cluster_G {
node[style=filled]
#label ="Pipeline of extraction"
rankdir="LR"
subgraph cluster_1{
node[color="#5DADE2"]
label="Extraction"
a,b,c,d
a->b [label="text"]
a->c [label="data"]
a->d [label="formulas"]
#e [label="Find possible\nmodels"]
}
subgraph cluster_2 {
node[color=lightblue]
h [label="knowledge\ngraph"]
label="Model Creation"
constraint=false
#h -> f
e [label="model\nconstraints"]
f [label="likely\nmodel"]
{rank=same h e}
e -> f
}
subgraph cluster_3{
node[color="#48C9B0"]
i -> g
f -> g
label="Solving"
{rank = same; i; g;}
}
{b,c,d} -> e
h -> i
h -> e
a -> h
a [shape=record,label="literature|code|documentation"]
b [label="rules"]
c [label="parameters"]
d [label="functions"]
g [label="code\ngeneration"]
i [label="solver"]
#}
}

117
doc/src/img/flu_pipeline.dot

@@ -0,0 +1,117 @@
digraph H {
graph [bb="0,0,925.64,284",
label="Pipeline of extraction",
lheight=0.19,
lp="462.82,11",
lwidth=1.67,
rankdir=RL
];
node [label="\N",
style=filled, shape=rectangle
];
{
i [color="#5DADE2",
height=0.5,
label="initial u",
pos="285.5,228",
width=0.75825];
p [color="#5DADE2",
height=0.5,
label=parameters,
pos="285.5,174",
width=1.3582];
f [color="#5DADE2",
height=0.5,
label="function\n u''(t) = -ku(t)",
pos="285.5,120",
width=1.195];
fi [
height=0.5,
label="function\ndu[1] = u[2]\ndu[2] = -p[1] * u[1]",
pos="285.5,120",
width=1.195];
ii [label="27.0C\n0C/s"]
pi [label="1/s²"]
a [label="SpringModel"]
t [label="time\ndomain", color="#5DADE2"]
ti [label="(0, 12𝜋)"]
u [label=units, color="#5DADE2"]
ui [label="C\nC/s"]
}
subgraph cluster_1 {
graph [bb="8,94,342.4,276",
label=Seasons,
lheight=0.19,
lp="175.2,265",
lwidth=0.81
];
node [color="#5DADE2"];
a -> p
a -> f [label=formulas,
lp="174.73,143",
pos="e,244.33,125.21 110.5,142.16 146.59,137.59 196.77,131.24 234.15,126.5"];
f -> fi [label=implemented,
lp="174.73,143"];
p -> pi [label=implemented]
p -> i [label=implemented]
i -> ii
a -> t -> ti
a -> ui
ui -> u
}
subgraph cluster_2 {
graph [bb="363.4,30,636.02,240",
constraint=false,
label="Disease Spread",
lheight=0.19,
lp="499.71,229",
lwidth=1.23
];
sir [label="SIRProblem",];
node [color="#5DADE2"];
{
graph [rank=same];
gamma [label="𝛾 = 20 person/(day𝘅C)"]
sirt [label="time\ndomain"];
pop [label=population];
}
sirpopi [label="10 Kperson"]
sirti [label="(0,200 days)"];
sirp [label="(40/day,𝛾)"];
sir -> pop
pop -> sirpopi
sir -> sirt
sirt -> sirti
sir -> sirp
a -> gamma
gamma->sirp
}
subgraph cluster_3 {
graph [bb="657.02,83,917.64,171",
label="Public Health Response",
lheight=0.19,
lp="787.33,160",
lwidth=0.61
];
person
z [label="doses"]
gu [label="dimensionless"]
regp [label="RegressionProblem"]
node [color="#5DADE2"];
v [label="vaccines"]
g [label="coverage"]
s [label="I"]
v -> r
// c -> d
v -> z
// r [label="c = β₁v +β₂gI"]
regp -> r
r [label="v=β₂gI"]
r -> v
r -> g
r -> s
s -> person
g -> gu
sir -> s
}
}

0
doc/flu_pipeline.pdf → doc/src/img/flu_pipeline.pdf

29
doc/src/img/formulas_sir.tex

@@ -0,0 +1,29 @@
\documentclass{minimal}%\documentclass{article}
%\pagestyle{empty}
%\headheight=0pt
%\headsep=0pt
\usepackage{amsmath}
% \usepackage{pstricks}
% \paperwidth=72.27pt
% \paperheight=72.27pt
% \topmargin=-72.27pt
% \oddsidemargin=-72.27pt
% \parindent=0sp
% \special{papersize=\the\paperwidth,\the\paperheight}
\begin{document}
% \begin{pspicture}(\paperwidth,\paperheight)
% \psframe[linecolor=red](\paperwidth,\paperheight)
% \end{pspicture}
\[
\frac{dS}{dt} = -\frac{\beta S I}{N}\\
\frac{dI}{dt} = \frac{\beta S I}{N} - \gamma I\\
\frac{dR}{dt} = \gamma I
\]
\begin{align}
\frac{dS}{dt} &= -\frac{\beta S I}{N}\\
\frac{dI}{dt} &= \hphantom{-} \frac{\beta S I}{N} - \gamma I\\
\frac{dR}{dt} &= \gamma I
\end{align}
\end{document}

56
doc/src/img/metamodel.dot

@@ -0,0 +1,56 @@
digraph G {
rankdir="LR"
subgraph cluster_meta{
label="Healthcare metamodel";
subgraph cluster_0 {
# style=filled;
color=lightgrey;
node [style=filled,shape=record];
a0 -> a2;
a1 -> a2;
a3 -> a2;
a0 [label="Information\nCampaigns", color="lightblue"];
a1 [label="Initial\nBeliefs"];
a3 [label="Social\nNetwork"];
a2 [label="Beliefs" ];
label = "Hand Hygiene\nBelief Model";
color=black
#{rank=same}
labelloc=t
}
subgraph cluster_1 {
node [style=filled, shape=box];
a2-> Disease [style="dashed"]
Disease -> Infection
Antibiotics-> Infection
#{rank=same Disease a0}
Disease [label="Disease\n Prevalence"]
Antibiotics [color="lightblue"]
label = "Disease\nModel";
color="#8b0000";
labelloc=t
}
subgraph cluster_2 {
node [style=filled, shape=box];
Infection -> end [style="dashed"]
MedRecord-> end
MedRecord [label="Medical\nRecords"]
Complications -> end
{rank=same}
label = "Health Outcome\nModel";
color=darkgreen
labelloc=b
}
subgraph cluster_3 {
node [style=filled,color=lightblue,shape=box];
label = "Cost Model";
#style=filled;
color=black;
labelloc=t
}
end [shape=Msquare, color="#5a87d7", label="Expected\nHealthcare\nCosts"];
}
label="A metamodel for predicting healthcare costs";
}

112
doc/src/img/pipeline.dot

@@ -0,0 +1,112 @@
digraph H {
graph [bb="0,0,925.64,284",
label="Pipeline of extraction",
lheight=0.19,
lp="462.82,11",
lwidth=1.67,
rankdir=LR
];
node [label="\N",
style=filled
];
{
b [color="#5DADE2",
height=0.5,
label=rules,
pos="285.5,228",
width=0.75825];
c [color="#5DADE2",
height=0.5,
label=parameters,
pos="285.5,174",
width=1.3582];
d [color="#5DADE2",
height=0.5,
label=functions,
pos="285.5,120",
width=1.195];
}
subgraph cluster_1 {
graph [bb="8,94,342.4,276",
label=Extraction,
lheight=0.19,
lp="175.2,265",
lwidth=0.81
];
node [color="#5DADE2"];
a [color="#5DADE2",
height=0.5,
label=documents,
pos="64.421,148",
width=1.345];
b;
a -> b [label=text,
lp="174.73,205",
pos="e,260.9,219.77 97.583,161.12 113.36,167.29 132.53,174.66 149.84,181 184.24,193.6 223.85,207.22 251.35,216.54"];
c;
a -> c [label=data,
lp="174.73,170",
pos="e,238.84,168.51 110.77,153.45 145.14,157.49 192.19,163.03 228.81,167.33"];
d;
a -> d [label=formulas,
lp="174.73,143",
pos="e,244.33,125.21 110.5,142.16 146.59,137.59 196.77,131.24 234.15,126.5"];
}
subgraph cluster_2 {
graph [bb="363.4,30,636.02,240",
constraint=false,
label="Model Creation",
lheight=0.19,
lp="499.71,229",
lwidth=1.23
];
node [color=lightblue];
{
graph [rank=same];
h [color=lightblue,
height=0.70711,
label="knowledge\ngraph",
pos="426.15,63",
width=1.5208];
e [color=lightblue,
height=0.70711,
label="model\nconstraints",
pos="426.15,185",
width=1.5057];
}
f [color=lightblue,
height=0.70711,
label="likely\nmodel",
pos="591.96,116",
width=1.0016];
h -> f [pos="e,559.06,105.49 471.46,77.482 495.7,85.233 525.39,94.723 549.24,102.34"];
h -> e [pos="e,426.15,159.43 426.15,88.734 426.15,108.89 426.15,129.05 426.15,149.21"];
e -> f [pos="e,560.8,128.96 467.14,167.94 492.82,157.26 525.78,143.54 551.38,132.89"];
}
subgraph cluster_3 {
graph [bb="657.02,83,917.64,171",
label=Solving,
lheight=0.19,
lp="787.33,160",
lwidth=0.61
];
node [color="#48C9B0"];
g [color="#48C9B0",
height=0.70711,
label="code\ngeneration",
pos="718.11,116",
width=1.4748];
i [color="#48C9B0",
height=0.5,
label=solver,
pos="877.92,112",
width=0.88108];
g -> i [pos="e,845.89,112.8 771.32,114.67 792.26,114.14 815.95,113.55 835.68,113.06"];
}
a -> h [pos="e,373.03,69.207 98.94,135.26 133.4,122.95 188.06,104.52 236.61,93 278.24,83.118 325.8,75.6 362.87,70.563"];
b -> e [pos="e,380.56,198.94 310.33,220.41 327.07,215.29 349.85,208.33 370.78,201.93"];
c -> e [pos="e,372.57,180.81 333.51,177.75 342.85,178.48 352.76,179.26 362.51,180.02"];
d -> e [pos="e,387.44,167.11 314.65,133.47 333.01,141.95 357.16,153.11 378.28,162.88"];
h -> i [pos="e,865,95.411 481.27,63 493.5,63 506.39,63 518.4,63 518.4,63 518.4,63 808.7,63 827.72,63 845.17,75.478 857.79,87.841"];
f -> g [pos="e,664.55,116 628.11,116 636.36,116 645.37,116 654.39,116"];
}

2
doc/schema.dot → doc/src/img/schema.dot

@@ -1,4 +1,4 @@
\neatograph[scale=0.45]{schema}{
digraph G{
rankdir = "LR"
node [shape=box]
subgraph cluster_0 {

0
doc/schema.pdf → doc/src/img/schema.pdf

60
doc/main.md → doc/src/index.md

@@ -1,30 +1,18 @@
---
abstract: |
This deliverable provides a high-level overview of the open-source
epidemiological modeling software packages that we have reviewed, and
outlines our intended approach for extracting information from
scientific papers that cite one or more of these packages. Our
extraction efforts are directed toward the construction of a knowledge
graph that we can traverse to reason about how to best map a set of
known, unitful inputs to a set of unknown, unitful outputs via parameter
modification, hyperparamter modification, and/or sequential chaining of
models present in the knowledge graph. Our overarching purpose in this
work is to reduce the cost of conducting incremental scientific
research, and facilitate communication and knowledge integration across
different research domains.
author:
- |
Christine Herlihy and Scott Appling and Erica Briscoe and James
Fairbanks
bibliography:
- 'refs.bib'
date: 'Dec 1, 2018'
title: |
Automatic Scientific Knowledge Extraction: Architecture, Approaches, and
Techniques
---
\maketitle{}
# Automatic Scientific Knowledge Extraction
Architecture, Approaches, and Techniques
We provide a high-level overview of the open-source
epidemiological modeling software packages that we have reviewed, and
outline our intended approach for extracting information from
scientific papers that cite one or more of these packages. Our
extraction efforts are directed toward the construction of a knowledge
graph that we can traverse to reason about how to best map a set of
known, unitful inputs to a set of unknown, unitful outputs via parameter
modification, hyperparameter modification, and/or sequential chaining of
models present in the knowledge graph. Our overarching purpose in this
work is to reduce the cost of conducting incremental scientific
research, and facilitate communication and knowledge integration across
different research domains.
Introduction
============
@@ -70,7 +58,6 @@ large scale.
\setcounter{tocdepth}{2}
\newpage
\tableofcontents
Scientific Domain and Relevant Papers
=====================================
@@ -368,14 +355,13 @@ and provide scripts for running the experiments they describe. We
believe this is an example of the kind of material we will be able to
perform useful information extractions on to inform the development of
metamodels. Figure
[\[fig:covar\_paper1\]](#fig:covar_paper1){reference-type="ref"
[\[fig:img/covar\_paper1\]](#fig:covar_paper1){reference-type="ref"
reference="fig:covar_paper1"} is an example of script code from
[@doi:10.1111/oik.04527]:
\centering
![Example script excerpt associated with [@doi:10.1111/oik.04527]
setting parameters for use in an ERGM model implemented by EpiModels
library.](covar_fig1.jpg){width="70%"}
library.](img/covar_fig1.jpg){width="70%"}
[\[fig:covar\_paper1\]]{#fig:covar_paper1 label="fig:covar_paper1"}
@@ -385,7 +371,6 @@ reference="table:info_extract"} is a non-exhaustive list of the kinds of
information extractions we are currently planning and the purposes they
serve in supporting later steps:
\centering
Extraction Type Description Sources
------------------ ----------------------------------------------------------------------------------- -----------------
Code References Creation and selection of metamodels to extend or utilize depending on user goals Papers, Scripts
@@ -465,9 +450,8 @@ which will connect to concepts. When papers are connected to other
papers, they are connected indirectly (e.g., via other vertices), except
for edges that represent citations directly between papers.
\centering
![An example of the knowledge graph illustrating the nature of the
schema.[]{label="fig:schema."}](schema.pdf){width="\textwidth"}
schema.[]{label="fig:schema."}](img/schema.dot.svg)
It is an open question for this research whether the knowledge graph
should contain models with the parameters bound to values, or the
@@ -585,8 +569,8 @@ possible description is to represent the DAG in a human-friendly way,
such as in Figure [\[fig:flu\]](#fig:flu){reference-type="ref"
reference="fig:flu"}.
\centering
![An example pipeline and knowledge graph elements for a flu response model.[]{label="fig:flu"}](flu_pipeline.pdf){width="\textwidth"}
![An example pipeline and knowledge graph elements for a flu response
model.[]{label="fig:flu"}](img/flu_pipeline.dot.svg)
Scientific Workflows (Pipelines)
--------------------------------
@@ -651,7 +635,7 @@ When assembling a metamodel, it is important to eliminate possible
combinations of models that are scientifically or logically invalid. One
type of constraint is provided by units and dimensional analysis. Our
flu example pipeline uses
[https://github.com/ajkeller34/Unitful.jl](Unitful.jl) to represent the
[Unitful.jl](https://github.com/ajkeller34/Unitful.jl) to represent the
quantities in the models including $C,s,d,person$ for Celsius, second,
day, and person. While $C,s,d$ are SI defined units that come with
Unitful.jl, person is a user defined type that was created for this
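
As a sketch of how such a user-defined unit can be introduced, the snippet below registers a `person` unit with Unitful.jl. The module name `FluUnits`, the dimension symbol `𝐏`, and the example rate are illustrative rather than the package's actual definitions (a precompiled package would perform the registration in `__init__`):

```julia
module FluUnits
using Unitful

# A new base dimension for population and its reference unit "person".
# SI units such as u"s", u"d", and u"K" already ship with Unitful.jl.
@dimension 𝐏 "𝐏" Population
@refunit person "person" Person 𝐏 false

end # module

using Unitful
Unitful.register(FluUnits)

# Quantities can now mix SI and user-defined units, e.g. a rate in person/(day*K):
γ = 20u"person" / (u"d" * u"K")
```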
@@ -876,8 +860,6 @@ vertices and/or edges; missing conversion steps to go from one unit of
analysis to another, etc.) may help us to highlight areas where either
additional research, and/or expert human input is needed.
\printbibliography
[^1]: <http://alun.math.ncsu.edu/wp-content/uploads/sites/2/2017/01/epidemic_notes.pdf>
[^2]: <http://white.ucc.asn.au/DataDeps.jl/latest/z20-for-pkg-devs.html#Registering-a-DataDep-1>

47
doc/src/slides.md

@@ -0,0 +1,47 @@
# SemanticModels.jl
Computational representations of model semantics with knowledge graphs for
metamodel reasoning.
## Goals
1. Extract a knowledge graph from Scientific Artifacts (code, papers, datasets)
2. Represent scientific models in a high level way, (code as data)
3. Build metamodels by combining models in hierarchical expressions, using reasoning over the knowledge graph from (1).
## Running Example: Influenza
Modeling the cost of treating a flu season taking into account weather effects.
1. Seasonal temperature is a dynamical system
2. Flu infectiousness γ is a function of temperature
## Running Example: Modeling types
Modeling the cost of treating a flu season taking into account weather effects.
1. Seasonal temperature is approximated by a 2nd order linear ODE
2. Flu cases follow an SIR model, a 1st order nonlinear ODE
3. Mitigation cost is a linear regression on vaccines and cases
## Scientific Domain
We focus on the Susceptible-Infected-Recovered (SIR) model of epidemiology.
1. Precise, concise mathematical formulation (see the ODE sketch below)
2. Diverse class of models: ODE vs agent-based, deterministic vs stochastic
3. FOSS implementations are available in all three scientific programming languages
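
As referenced in item 1, here is a minimal sketch of the SIR formulation as an ODE, written against DifferentialEquations.jl (already a project dependency); the parameter values and time span are illustrative only:

```julia
using DifferentialEquations

# Standard SIR equations: dS/dt = -βSI/N, dI/dt = βSI/N - γI, dR/dt = γI
function sir!(du, u, p, t)
    S, I, R = u
    β, γ, N = p
    du[1] = -β * S * I / N
    du[2] =  β * S * I / N - γ * I
    du[3] =  γ * I
end

u0 = [990.0, 10.0, 0.0]          # S, I, R
p = (0.5, 0.25, sum(u0))         # β, γ, N (illustrative values)
prob = ODEProblem(sir!, u0, (0.0, 100.0), p)
sol = solve(prob)
```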
## Knowledge Graph
Picture of KG sample
## Knowledge Graph Schema
Picture of KG schema
## Knowledge Graph Schema
Picture of Flu example
## Conclusions

25
src/Semantics.jl

@@ -1,3 +1,8 @@
""" SemanticModels
provides the AbstractModel type and constructors for building hierarchical model representations.
"""
module Semantics
using Unitful
@@ -50,7 +55,12 @@ function __init__()
Unitful.register(Semantics)
end
""" BasicSIR
defines a representation of Susceptible Infected Recovered models using Unitful
numbers and expressions.
"""
function BasicSIR()
β = NumParameter(:β, u"person/s", TransitionRate)
γ = NumParameter(:γ, u"person/s", TransitionRate)
@@ -72,12 +82,25 @@ end
include("diffeq.jl")
include("regression.jl")
""" CombinedModel
represents a model that has a fixed set of dependencies.
It is the basic building block of a model DAG.
"""
mutable struct CombinedModel{F,S} <: Model
deps::F
target::S
end
""" solve(m::AbstractModel)
solves a model. For models with dependencies, the deps are solved
first and then the node itself. This is the function that evaluates
the model DAG.
"""
function solve(m::CombinedModel)
return solve(m.target(m, solve.(m.deps)))
end
end
end
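
To make the evaluation pattern in `solve(m::CombinedModel)` concrete, here is a toy, self-contained re-implementation of the recursion; `LeafModel` and the anonymous target function are hypothetical stand-ins, not part of the package:

```julia
# Toy sketch of the DAG evaluation pattern used by CombinedModel's solve.
abstract type Model end

struct LeafModel <: Model
    value::Float64
end
solve(m::LeafModel) = m.value

mutable struct CombinedModel{F,S} <: Model
    deps::F    # upstream models
    target::S  # callable: (combined_model, solved_deps) -> Model
end

# Solve the dependencies first, then solve the model the target builds from them.
solve(m::CombinedModel) = solve(m.target(m, solve.(m.deps)))

leaves = [LeafModel(1.0), LeafModel(2.0)]
total  = CombinedModel(leaves, (m, vals) -> LeafModel(sum(vals)))
solve(total)   # == 3.0
```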

9
src/diffeq.jl

@@ -11,6 +11,10 @@ export parameters, domain, initial_conditions, flux, odeproblem, accelerate!,
# dy = -c*y + d*x*y
# end a b c d
""" hookeslaw
the ODE representation of a spring. The solutions are periodic functions.
"""
function hookeslaw(du, u, p, t)
# u[1] = p
# u[2] = v
@@ -18,6 +22,11 @@ function hookeslaw(du, u, p, t)
du[2] = -p[1] * u[1]
end
""" SpringModel
represents the second order linear ODE governed by [hookeslaw](@ref).
"""
mutable struct SpringModel{F,D,U}
frequency::F
domain::D
