
On Symbolic Execution and more

Wait, what?

It is exactly what it sounds like.

Symbolic execution means executing a program with abstract symbolic values in place of concrete ones for its variables, in order to determine the flow behaviour of the program.

This concept was put forth in a paper titled “Symbolic Execution and Program Testing” by James C. King. Rather than using the data objects provided by the programming language for storing values, we use arbitrary symbolic values for computation. You can see that normal computation is a special case of symbolic execution in which we “concretize” the values. Let us take a look at some assumptions made by the author.

Assumptions

  1. Computer programs deal only with integers.
    This assumption is fairly reasonable and almost harmless in many analyses. Although a lot of programs work with float and double values, we will assume integral values only in order to reduce the search space. We will also ignore integer overflows (in the case of 32/64-bit integers in C, for example).

  2. The execution tree resulting from many programs is infinite.

  3. Symbolic execution of IF statements requires theorem proving, which is mechanically possible.
    This is a very critical assumption and forms the basis of all analysis performed on the execution tree.

Why symbolic execution?

It is the “workhorse” of modern software testing. As with the topic of my previous blog post, symbolic execution saw a rise in interest and research due to the increase in CPU speeds and the ability to execute things in parallel. We will look at how exactly this is useful. Also, it is one of the very few ideas which has made it into industry and practice, breaking out of the academic bubble.

How does it compare to the current techniques of static and dynamic analyses?

Symbolic execution provides some advantages over both these approaches. Let us look at them:

In comparison to (dynamic) testing, it allows us to explore a much broader search space. We can make sure that the program is tested on almost all possible inputs before it is shipped.

Static analysis has seen a lot of growth in the past few years. We now have better approaches for applying techniques to binaries as well as to source code. But one of the fundamental issues with such algorithms is that they can point out that there is a possibility of an error, yet they can never predict or output the input which triggers the exploit. Symbolic execution scores a +1 in that regard.

I think the above two reasons are enough to make us want to peek under the hood.

Control Flow graph and understanding the semantics of the flow

As Wikipedia defines it,

A CFG is a representation, using graph notation, of all the paths that might be traversed through a program during its execution.

A symbolic execution engine works on exactly this representation of the program. Instead of giving variables numeric (concrete) values, we pick abstract symbolic values for them. Let us check out an example:

int main() {
  int i;
  scanf("%d", &i);     /* read an integer from the user */
  if (i == 5) {
    printf("Error.");  /* the "interesting" branch */
  } else {
    printf("Yay.");
  }
  return 0;
}

Now we can draw a CFG of the above program as follows:

From the above graph we understand that at the IF statement the graph branches into two. This means that, depending on the conditions imposed on the program’s variables in the IF statement, the program is going to behave differently. In this particular case, if the value of the variable i is 5, control flows to the true branch; otherwise it flows to the false branch. The rules for evaluating these statements must be extended to include symbolic values as well.

Understanding the generation of the path equation

A program state is usually defined by the values of its variables (global and local, of course) during normal execution. But King, in his paper, describes the generation of a path condition which has to be included in the program state. The path condition (pc), also called the path expression, is always initialized to true. As the program follows a particular execution path, more and more conditions are conjoined to the pc. Each statement thus contributes to the path condition (unless it is a trivial assignment).

An interesting situation arises for the IF condition. The trivial case is when the condition evaluates to one of the two boolean values. If it does, we continue with that branch and do not consider the other one. But if we cannot determine at symbolic-execution time what the condition will evaluate to, then we need to take a look at both branches. This happens when a symbolic variable is involved in the condition.

We now fork the execution into two branches: one where the true branch was taken, and another where it was not. This basically leads us to generate two expressions, one of the form pc AND (i == 5) and another of the form pc AND (i != 5). Symbolic execution of the rest of the program then continues with both these states in memory.
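
To make the bookkeeping concrete, here is a minimal sketch (in Clojure, which we will use again later) of how an engine might represent a symbolic state and fork it at a branch. The map layout and the fork helper are purely illustrative and not taken from King’s paper.

;; A symbolic state: a store mapping program variables to symbolic values,
;; plus the path condition accumulated so far (initialized to true).
(def initial-state
  {:store {'i 'alpha}   ; i holds the symbolic value alpha
   :pc    [true]})      ; the pc is a conjunction of constraints

;; Forking at a branch whose condition cannot be decided symbolically:
;; one successor assumes the condition, the other assumes its negation.
(defn fork [state condition]
  [(update state :pc conj condition)
   (update state :pc conj (list 'not condition))])

;; At `if (i == 5)` we get two states, one per branch:
(fork initial-state '(= i 5))
;; => [{:store {i alpha}, :pc [true (= i 5)]}
;;     {:store {i alpha}, :pc [true (not (= i 5))]}]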

Let us see the generation of the path expression of the above C program:

  1. Statements 1, 2: A fresh symbolic variable is assigned to i. Let us call it alpha. Current pc: true.
  2. Statement 3: An IF condition is encountered. Since it cannot be evaluated, we generate two conditions. pc: true AND (alpha == 5) | true AND (alpha != 5).
  3. Statement 4: Since this is the last statement in the program, we have already evaluated all the branches, and hence the expression here is the logical OR of the two expressions in memory.

Thus, after evaluating the entire execution tree in this manner, we end up with a set of conditions on the variables at each leaf node. This is a completely exhaustive search mechanism, since we have assumed the values to be totally abstract and have provided no information on what they could be (apart from being integers).

Below is the tree representation:

This is a very simple example, but it gets the idea through. Although things get a lot messier with assignments and goto statements, the concept remains the same. The material above is enough to understand basic literature on the topic.

Theorem provers and SMT solvers

Well, now what? We have the path expressions of the entire program. Assuming we want to generate test cases for the ERROR branch, we need to solve the corresponding boolean expression. Enter SMT solvers! SMT (Satisfiability Modulo Theories) solvers try to solve such decision problems. Since, throughout the program, we are applying constraints on each variable, they are also known as constraint solvers. Microsoft’s z3 is one of the most used (in production) theorem provers freely available online.

They take in an expression containing symbolic variables and the constraints imposed upon them. They then try to give us a result depending on the satisfiability of the problem. Sometimes they may give us a result in seconds, whereas at other times they may tell us that the problem is unsatisfiable. Solvers can also fail to decide either way, since the underlying problem is NP-complete. The algorithms these solvers use internally are a matter for a completely different blog post altogether, but they basically try to combine the expressions in the given path condition and see if they can find a valid solution. Since the search space is very large (unless otherwise bound by a constraint), this can take a lot of time. Performance also depends, to a certain extent, on the number of conditions, and it decreases as the number of expressions grows.
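
As a toy illustration (using clojure.core.logic rather than z3, and reusing the alpha variable from the example above), asking a constraint solver for a concrete input that reaches the ERROR branch might look roughly like this; the namespace name and the interval bound are arbitrary choices of mine:

(ns pc-demo
  (:refer-clojure :exclude [==])
  (:require [clojure.core.logic :refer :all]
            [clojure.core.logic.fd :as fd]))

;; Find a concrete value of the symbolic variable alpha satisfying the
;; path condition of the ERROR branch: pc AND (alpha == 5).
(run 1 [alpha]
  (fd/in alpha (fd/interval 0 100))   ; bound the search space
  (fd/== alpha 5))
;; => (5)  -- feeding 5 to the program drives it down the ERROR branch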

Are we going to have a demo or what?

In Clojure? Sure! We are going to try and solve Project Euler problem 9. The problem statement is as follows:

A Pythagorean triplet is a set of three natural numbers, a < b < c, for which
a^2 + b^2 = c^2
For example, 3^2 + 4^2 = 9 + 16 = 25 = 5^2.
There exists exactly one Pythagorean triplet for which a + b + c = 1000. Find the product abc.

It would be pretty easy to just iterate over all the possible values below 1000. But we will try and do something new! \o/

Okay, now let’s say that a, b and c are symbolic variables. We will use the clojure.core.logic library for this.

(ns clj.p9
  ;; core.logic's == conflicts with clojure.core/==
  (:refer-clojure :exclude [==])
  (:require [clojure.core.logic :refer :all]
            [clojure.core.logic.fd :as fd]))

(defn solution
  ;; We take as input the sum of the three Pythagorean variables and return the product.
  [sum]
  ;; The result comes back as a vec [a b c].
  ;; Since we want the product a*b*c, let's just multiply the elements together.
  ;; We use the function first since the two results returned are
  ;; [a b c] and [b a c], with c being the hypotenuse. Hence we choose one of them.
  (reduce *
          (first
           ;; Here we go! `run*` means we want the computer to compute
           ;; all the possible values for the expression `q`.
           (run* [q]
             ;; Declare 3 fresh symbolic variables for a, b and c.
             (fresh [a b c]
               ;; The solution should be a vec of the values of a, b and c.
               (== q [a b c])
               ;; We know that all the numbers have to be distinct,
               ;; so we use fd/distinct to add that constraint.
               (fd/distinct [a b c])
               ;; Another constraint we can derive: since the sum of two sides is always
               ;; greater than the third, each side has to be less than half of `sum`.
               ;; This reduces our search space by quite a lot. fd/in lets us specify the range;
               ;; (fd/interval start end) is the actual numeric range.
               (fd/in a b c (fd/interval 1 (/ sum 2)))
               ;; We now add the given constraints on the variables.
               (fd/eq
                ;; sum = a + b + c
                (= sum (+ a b c))
                ;; c^2 = a^2 + b^2 (since they form a Pythagorean triplet)
                (= (* c c) (+ (* a a) (* b b)))))))))

EZ PZ. Let’s run this!

(time (solution 1000))
"Elapsed time: 2345.989466 msecs"
31875000

There we have it! Solved in about 2.3 seconds. Although this was fairly simple, we learnt a lot on the way. core.logic is not as highly optimized as z3, but it is fairly fast. It is built for relational (logic) programming, of which this kind of symbolic analysis is just one application. z3 has a nice Python API to interact with it!

Issues involved

Current research trends include analysis of both source code and raw binaries. Program testing can be approached in an entirely new way if the inputs to methods are considered to be symbolic and we then fuzz test over the entire search space. It is, however, important to realize that there are a lot of problems which can come up in practical implementations:

  1. Path explosion: If there are a lot of branching statements in the program, a new path expression is generated at each branch, creating a new instance of the symbolic runtime and increasing the number of paths. This is a very serious problem and requires smart input from the user to prune certain branches off.

  2. Environment interactions and system calls: A symbolic engine will find it very tough to interact with the underlying OS or the runtime environment. This can hinder a lot of core systems testing. The program may take decisions based on the return value of a particular read call, or write to a particular file and branch on the result. There is ongoing research into implementing a virtual model of the underlying stack for the engine to interact with.

A thought experiment: Symbolic Execution in Healthcare?

Medical systems need to be completely fool-proof and verified before they are sent to production. Symbolic execution and analysis can be absolutely critical in such verification. As with normal programs, even though certain inputs are completely pointless (for example, providing a negative number to a square-root calculator), it is important that we implement correct error handling.

Although the above is practical, it would be more interesting to apply this logic to the human body directly, which is… let’s just say it’s a thought experiment. Some of my thoughts related to the ideas below stem from the discussion here. Below are some assumptions I make:

  1. Since the human body is made up of a finite number of atoms, at any point in time there is a set of values which describes the entire body state (assuming we manage to deconstruct the brain).

  2. Although huge, there is a model of the human body which can be provided to a computer to generate an approximate system emulating the body’s internals and its reactions to stimuli. Although far-fetched, we are slowly inching toward such a goal.

  3. The computer we imagine is fast enough to perform calculations on such a scale.

Imagine medicine in such a scenario. Assume a medicinal drug to be a symbolic variable and the program to be the human body itself. The medicine can be thought of as an input to the body, and hence the system should be able to track the breakdown of the medicine in the body. The IF conditions in this case would represent existing patient conditions and the patient’s body state, which would lead to different reactions in the body. Once we have all the leaf nodes of the graph of such a system, the resulting information can be highly informative for providing crucial healthcare and for designing drugs, since we will have data regarding each breakdown of the chemicals. Drug design could benefit hugely from such a scenario!

The possibilities are endless. For now, let me get back to ValidIS.

References!

[1] James C. King, “Symbolic Execution and Program Testing”, Communications of the ACM, 1976.
[2] Lecture notes on Symbolic Execution, University of Maryland.

Solving the trivial problem of dimensional analysis in Clojure

Functional Programming

Functional Programming (or fp, as it is often called) is a programming paradigm which aims at describing what we want to do rather than how we want to do it. The entire universe of fp revolves around the concept of pure functions.

A pure function is a function where the return value is only determined by its input values, without observable side effects.

Imagine, if you will, a conveyor belt which takes in raw material as input. There are a lot of tools which process the input as it goes along, eventually producing the desired result. You can consider functional programming to be similar to this situation in a number of ways. The conveyor belt can be thought of as the program, with each of the tools being a pure function (it produces no external side effects and depends only on its input). The only difference is that on the belt the original resource is modified in place, whereas in the program the data is worked upon and a new instance is passed on to the next function.

Although fp has been around for a while, due to the advent of multi-core systems, big data and distributed computing it is seeing a resurgence in terms of interest in the programming community. It is backed by very strong literature and there is a lot of ongoing research on the same.

Some of the principles that fp operates by are as follows:

  1. Immutable Data Structures: So how do we program with such an apparent impairment? Rather than modifying data in place, we operate on it and create a new instance. Because persistent data structures share structure between versions, this costs far less in memory allocation and storage than naive copying, and immutability allows the compiler to make certain assumptions that are unsafe in an imperative language.

  2. Higher Order Functions: We can consider these as functions which take in other functions as arguments or return them as results. The idea is derived from the lambda calculus. One of the best examples of a HOF is the function map, which takes a function as its first argument and applies it to each element of the list provided as the second argument. Since each function application is independent of the others (assuming pure functions), there is a lot of scope for speeding this up by running the same program on a multi-core system. The results can then be aggregated using a function such as reduce (see the short sketch after this list).

  3. Separation of Functions and Values: To an OO person, the concept of encapsulation is pivotal: a class has internal state which can be modified by calling class methods which operate on those values. In fp, these two entities are clearly separated.

  4. Statelessness: Since all functions are pure, they have no idea of what has happened or what is going to happen to the input they receive. They only care about operating on the data and producing the output. This fact, combined with immutability, is extremely useful for unit testing, since each function has a specified set of duties to perform and, in doing so, will not affect any external state or variables. The state of the program is kept local, and functions are completely explicit about what they depend on and produce.
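
As a small illustration of point 2, here is what the map/reduce pattern looks like in Clojure (the numbers are arbitrary):

;; Square each element; map takes the function as its first argument.
(map #(* % %) [1 2 3 4])
;; => (1 4 9 16)

;; Aggregate the independent results with reduce.
(reduce + (map #(* % %) [1 2 3 4]))
;; => 30

;; Because each application of the squaring function is independent,
;; pmap can evaluate them in parallel without changing the result.
(reduce + (pmap #(* % %) [1 2 3 4]))
;; => 30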

Clojure

Clojure is a dynamic, general purpose programming language, combining the approachability and interactive development of a scripting language with an efficient and robust infrastructure for multithreaded programming.

Rich Hickey is a genius. Clojure sits on top of the JVM, leverages Java’s type system, and supports calling Java code from within. Clojure has a set of internal data structures such as maps, sets, vectors and lists. It supports lazy sequences, immutability and persistent data structures. It also boasts a typical Lisp syntax (which can be a little intimidating for the novice programmer).

Why am I doing this?

I was inspired by the solution to the problem as given by Peter Henderson in his book on Functional Programming and its applications. It is described in a very general manner and I thought it would be interesting to implement it in Clojure using a few of my own variations.

Why Clojure?

It’s awesome. It’s functional and thinking functionally gives you a way to come up with interesting solutions to common problems encountered during programming.

Cutting to the point

Dimensional Analysis is a very simple method to check if a physical equation is valid or not.

The dimensions of a quantity refer to the type of units that must be used in order to obtain the measure of that quantity. These are built from the 7 fundamental units, which cannot be described in terms of anything else. We will be looking mainly at equations involving quantities made up of Mass (M), Length (L) and Time (T).

We will assume that the equation is linear in all the physical variables and that the only operators used are multiplication, division, addition and subtraction. Although this makes things a lot simpler than they really are, it will help us set up a basic framework for solving such a problem.

Understanding the problem

We will be trying to solve the following problem:

Assume that we have a set of variable quantities v1, v2, etc., each with a unique single-letter symbol, and a given set of dimensions a1, a2, etc.

We will be trying to determine two things:
1. If the equation is dimensionally correct.
2. The dimension of the resulting quantity.

Let us assume a very straightforward model of an equation.

  1. We will consider that the equation is made up of Expression(s).
  2. Each Expression has a Left Hand Side(LHS) and a Right Hand Side(RHS) with an Operator acting upon them.
  3. The most fundamental quantity in the equation will be an Atom. An Atom can be a constant or a variable.
  4. Each quantity has a Dimension where we mention the exponents to the base quantities.

We will be using schema for data descriptions. schema provides functions to validate data and find out whether it conforms to the given schema definitions.

;; schema is required as 's':
(require '[schema.core :as s])

;; Dimension will contain the exponents to M(Mass), L(Length) and T(Time).
;; With s/Int, we specify that the values have to be integral.
(def Dimension
  {:M s/Int
   :L s/Int
   :T s/Int
   })

;; An Atom will contain a type specifying if it is a variable or a constant.
;; Let us also have the symbol of the variable, in the data definition.
;; Most importantly, the dimension of the quantity has to be mentioned as well.
;; As you will observe, the dimension is described in terms of the earlier schema.
;; This will help in Atom validation.
(def Atom
  {:Type      (s/enum :variable :constant)
   :Symbol    s/Str
   :Dimension Dimension})

;; Let us also define the dimension for a constant quantity.
;; Since it is a constant it will not have exponents in any of the quantities.
(def Constant
  {:M 0
   :L 0
   :T 0})

;; Since we are assuming a simple case, let us say that the operations can be addition,
;; subtraction, division and multiplication.
;; The model is so designed that it can be easily extended to other operations.
(def Operator
  (s/enum :add :subtract :divide :multiply))

;; We will now describe the definition for Expression.
;; An expression can be described in a recursive manner.
;; The LHS and the RHS both can be Expressions or Atoms. schema provides a nice way to do this.
(def Expression
  {:LHS (s/conditional #(= nil (:Type %)) (s/recursive #'Expression) :else Atom)
   :RHS (s/conditional #(= nil (:Type %)) (s/recursive #'Expression) :else Atom)
   :Operator Operator})

;; We first check if the data in the LHS quantity has a Type field.
;; If it does, we know that it is an Atom, and we direct the checker to validate it as such.
;; If it does not, we check for an Expression instead.
;; This can definitely be optimized, but I thought this was one way to do it.

Coming up with functions for dimension calculations

Now, that we have the basic definitions sorted out, we can look forward to implementing some functions for checking dimensional correctness.

Assume a function which takes in an Expression or an Atom. We need to find out if the given expression is dimensionally correct. Let us divide the possible scenarios into the following cases:

  1. Input is an Atom. We can then return the dimension of the quantity.
  2. Input is a constant. We can then return Constant, which is basically a dimensionless quantity.
  3. Input is an Expression with Operator :add. In this case, we need to check if the LHS and the RHS have the same dimensions. If they do, we can return the dimension of either. If they do not, we return an undefined value (say UNDEF).
  4. The case for subtraction is similar.
  5. Input is an Expression with Operator :multiply. We don’t need to check anything for this; we return the sum of the dimensions of the two quantities involved.
  6. Input is an Expression with Operator :divide. Here we return the difference between the dimensions of the numerator (LHS) and the denominator (RHS).

;; A generic undefined value. We can use this to convey that the equation is dimensionally incorrect.
(def UNDEF
  "INCORRECT")

;; Used to calculate the dimensions of the resulting quantity after the product of the two inputs.
;; If either of the quantities is UNDEF, this function returns UNDEF.
(defn product
  [dim1 dim2]
  (cond
    (= UNDEF dim1) UNDEF
    (= UNDEF dim2) UNDEF
    :else {:M (+ (:M dim1) (:M dim2))
           :L (+ (:L dim1) (:L dim2))
           :T (+ (:T dim1) (:T dim2))}))

;; Used to calculate the dimensions of the resulting quantity after the division of the two inputs.
;; If either of the quantities is UNDEF, this function returns UNDEF.
(defn ratio
  [dim1 dim2]
  (cond
    (= UNDEF dim1) UNDEF
    (= UNDEF dim2) UNDEF
    :else {:M (- (:M dim1) (:M dim2))
           :L (- (:L dim1) (:L dim2))
           :T (- (:T dim1) (:T dim2))}))

;; Main logic function. Takes in an expression as an argument.
;; This function gives as output the dimension of the resulting quantity,
;; based on the Expression map provided to it.
;; If the equation is dimensionally incorrect, it will return UNDEF.
(defn dim
  [expr]
  (let [lhs           (:LHS expr)
        rhs           (:RHS expr)
        operator      (:Operator expr)
        expr-type     (:Type expr)]
    (cond
      (= expr-type :variable)  (:Dimension expr)
      (= expr-type :constant)  Constant
      (= operator :add)        (if (= (dim lhs) (dim rhs))
                                 (dim lhs)
                                 UNDEF)
      (= operator :subtract)   (if (= (dim lhs) (dim rhs))
                                 (dim lhs)
                                 UNDEF)
      (= operator :multiply)   (product (dim lhs) (dim rhs))
      (= operator :divide)     (ratio   (dim lhs) (dim rhs)))))
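
A quick sanity check of dim on a couple of hand-built maps (the v-atom value below is just an illustration, chosen to match the velocity dimension used later in the post):

;; A velocity variable: dimension L T^-1.
(def v-atom {:Type :variable :Symbol "v" :Dimension {:M 0 :L 1 :T -1}})

;; Adding a velocity to a velocity is fine and preserves the dimension.
(dim {:Operator :add :LHS v-atom :RHS v-atom})
;; => {:M 0, :L 1, :T -1}

;; Adding a velocity to a dimensionless constant is not.
(dim {:Operator :add
      :LHS v-atom
      :RHS {:Type :constant :Symbol "2" :Dimension Constant}})
;; => "INCORRECT" (i.e. UNDEF)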

Setting up a grammar for parsing the equation

Now that we have the general setup ready, we can go ahead and worry about actually taking user input and parsing it into the given Expression format. For this, we will be using the instaparse library. It allows you to specify your own grammar and builds a parse tree around it.

Let us consider the simplest linear equation we will be looking to solve:

v = u + at

The first equation of motion: very simple to understand, but parsing it might be a pain. To make things slightly tougher, let’s say that the quantities in the equation can have integral coefficients as well. So it becomes:

v = Xu + Yat, where X, Y are integers

We can basically take a parse tree grammar used for arithmetic and make a few changes to it to include the physics terms, or as I will refer to them, pterms.

;; Introducing pterm as a new grammar rule.
;; pterm is basically a coefficient and multiple symbols multiplied together.
;; The output format specified here is :enlive, which will help us in formatting the parsed data later.
(def physics
  (insta/parser
    "expr        = add-sub
    <add-sub>   = mul-div | add | sub
    add         = add-sub <'+'> mul-div
    sub         = add-sub <'-'> mul-div
    <mul-div>   = pterm | term | mul | div
    mul         = mul-div <'*'> term
    div         = mul-div <'/'> term
    pterm       = coefficient sym
    coefficient = #'[0-9]+'
    sym         = #'[a-z]'+
    term        = number | <'('> add-sub <')'>
    number      = #'[0-9]+'"
    :output-format :enlive))

Parsing the tree!

Alright! Let us dive directly into the code:

;; instaparse provides us with an awesome function which is known as transform.
;; transform allows us to traverse the tree from bottom-up and
;; act upon each node data and replace it by operating upon it.
;; Since :expr will be the root node, it will eventually contain the entire expression
;; map; hence we provide it the identity function.
(def parse-tree->schema-def
  {:sym         replace-sym
   :coefficient replace-coefficient
   :add         replace-add
   :sub         replace-sub
   :div         replace-div
   :term        replace-term
   :pterm       replace-pterm
   :expr        identity})

;; Each of the replace-"operator" function can be defined as follows:
;; (Considering replace-add for example)
(defn replace-add
  [term1 term2]
  {:Operator :add
   :LHS      term1
   :RHS      term2})

;; This allows us to simplify the function logic to a great extent.
;; The parser gives us the :add node with two children,
;; each representing a side of the operation.
;; As mentioned before, the same holds for all the operator-replacement functions.

Now, we have the basic setup for conversion ready, but again, we are missing a few important functions:

  1. replace-coefficient: Since a coefficient is going to be a constant, we can just create an Atom map for the constant and return it. The Symbol for it can be the value of the coefficient.

  2. replace-term: As we can see from the parsing rules above, a term will always contain an integral value, and hence we can just reuse the Constant dimension defined earlier (both helpers are sketched right after this list).
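
One possible way to write these two helpers, following the description above (a sketch of my own, not necessarily the exact code behind the output shown later):

;; A coefficient becomes a constant Atom whose Symbol is its value.
(defn replace-coefficient
  [value]
  {:Type      :constant
   :Symbol    value
   :Dimension Constant})

;; A bare numeric term is likewise dimensionless; wrapping the Constant
;; dimension in a constant Atom keeps dim happy, since dim only looks
;; at :Type and :Dimension.
(defn replace-term
  [value]
  {:Type      :constant
   :Symbol    value
   :Dimension Constant})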

Now, the two most important functions left are replace-pterm and replace-sym. Let us see how we can tackle those:

;; The pterm is basically a multiplication of the coefficient term and the symbol terms.
;; Assuming we have both of them defined correctly:
(defn replace-pterm
  [coeff-term symbol-term]
  {:Operator :multiply
   :LHS coeff-term
   :RHS symbol-term})

;; This one is a bit more complicated, since we can have multiple symbols
;; being multiplied with each other.
;; Our expression model is pretty simple: we have an LHS and an RHS, both of which
;; can either be an Atom or an Expression.
;; Suppose we have the term "w*h*a*t"; we can consider it as
;; w * (h * (a * t))
;; This exact algorithm is implemented below.
;; It takes the first element of the symbol list and recurses on the rest.
;; If we have a single element, we call find-sym, which we will describe later,
;; but if we have 2 elements, then we know that we have reached the end of the list and
;; can create a multiplication Expression.
(defn replace-sym
  [& sym]
  (case (count sym)
    1 (find-sym (first sym))
    2 {:Operator :multiply
       :LHS      (find-sym (first sym))
       :RHS      (find-sym (last sym))}
    ;; default case: recurse on the remaining symbols
    {:Operator :multiply
     :LHS      (find-sym (first sym))
     :RHS      (apply replace-sym (rest sym))}))

All well and good. The only remaining piece of the puzzle is find-sym. This is basically a lookup into an Assoc-list. Henderson defines a map/list as an Assoc-list if we can look up the symbol dimensions in it. Easy.

;; Defines the variable 'v' as an Atom.
(def Assoc-List
  {:v {:Type :variable
       :Symbol "v"
       :Dimension {:M 0
                   :L 1
                   :T -1}}})

;; Simple lookup into the Assoc-List map.
;; Returns UNDEF if not found.
;; We can have the user populate this map as an earlier exercise.
(defn find-sym
  [sym]
  (let [sym-key   (keyword sym)
        sym-value (sym-key Assoc-List)]
    (if (not-empty sym-value)
      sym-value
      UNDEF)))

Yay!

Lo and behold! We now have a working program which can dimensionally analyze equations for us. We can then define a function to aggregate all of the above into a single API exposing the required functionality. Maybe for a later time ;). Let us see it running:

user=> (def equation (physics "2u+3at"))
#'user/equation
user=> (insta/transform parse-tree->schema-def equation)
{:Operator :add, :LHS {:Operator :multiply, :LHS {:Type :constant, :Symbol 2, :Dimension {:M 0, :L 0, :T 0}}, :RHS {:Type :variable, :Symbol "u", :Dimension {:M 0, :L 1, :T -1}}}, :RHS {:Operator :multiply, :LHS {:Type :constant, :Symbol 3, :Dimension {:M 0, :L 0, :T 0}}, :RHS {:Operator :multiply, :LHS {:Type :variable, :Symbol "a", :Dimension {:M 0, :L 1, :T -2}}, :RHS {:Type :variable, :Symbol "t", :Dimension {:M 0, :L 0, :T 1}}}}}
user=> (def parsed-result (insta/transform parse-tree->schema-def equation))
#'user/parsed-result
user=> (s/validate Expression parsed-result)
;; => returns the validated map (no exception), so the schema check passes
user=> (dim parsed-result)
{:M 0, :L 1, :T -1}
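
For contrast, a dimensionally incorrect expression comes back as UNDEF. This assumes the Assoc-List has also been populated with the entry for a seen in the output above, in addition to the v entry shown earlier:

user=> (dim (insta/transform parse-tree->schema-def (physics "2v+3a")))
"INCORRECT"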

P.S: I am a novice Clojure programmer. Do suggest improvements in my coding style and the above code itself. It is hosted on my Github.

References:
Peter Henderson, Functional Programming: Application and Implementation.

Software Engineering in Health Systems Workshop 2016 – Where Engineering is Not Enough

This weekend I had the opportunity to attend the Software Engineering in Health Systems (SEHS) Workshop in Austin, Texas. Through a series of presentations, panels, and group discussions on a range of health care systems topics, two main themes seemed to arise:

  1. Health care is an information intensive domain in which contemporary software engineering techniques are ill-equipped to operate. Systems engineering techniques seem to handle the problems more easily (though significant work is required).

  2. Though health care professionals and engineers seem to collaborate in these environments, they each fail to fully grasp the perspective of the other; this results in “half-way” solutions which address an immediate need but are brittle and ultimately result in system instability.

The good news, in my opinion, is that the solutions to these problems do not require tremendous innovation in science or engineering. Instead, they require the correct allocation of human resources to the right problems.

In fact, allocation of resources (including humans) is one of the three main views (the others being static and dynamic structures) from which you can consider the architecture of a software system; these problems are architectural in nature.

I think the solution to (1) is to bring software engineers to the tables at which systems decisions are made. Kevin Sullivan made this point beautifully in one of the workshop discussions. He indicated that software engineers are already trained to think about systems issues; if they can be part of the discussion about the socio-technical systems in health care, then we will begin to see progress.

(2) is a matter of education; interestingly though, I think the problem cannot be addressed by another classroom course in a standard undergraduate education. In my experience, understanding the complexity of the operations carried out in both health care and software engineering requires a more tacit type of knowledge, one that can only be acquired by actively being part of the environment. Joachim Reiss, VP of R&D at Siemens Health Care, made this point exquisitely clear in his keynote presentation: we need to get engineers into hospitals and health care providers into design labs.

Of course, neither of these problems is trivial; they involve complex social and political structures that make change, especially social change, tremendously difficult. This is a simultaneously great and maddening realization. It is nice to know that we are very capable of effecting massive change in health care systems through social change; however, it is clear from history (and recent events) that social change can be extremely difficult to enact on a large scale. Sadly, as much as I wish we could, we cannot rely strictly on engineering to solve this problem for us.

Requirements Documentation with Gherkin


We have been kicking off our ePRO study (more on this study later) and there are a range of partners in the study. We needed to quickly progress to a level of detail in requirements to ensure common ground even before we get IP and ethics in place.

For this work, I have chosen to use Gherkin, trying to drive some behaviour-driven development (BDD). We are not expecting to use Gherkin throughout development — each of the partners has their own methods and tooling, and Gherkin / Cucumber is not used by all of them. Gherkin does, however, provide a clear and accessible convention for structuring requirements as features and scenarios.

Further, as domain specific languages are becoming a popular topic in our lab, it seems fitting to use another one for requirements.

Gherkin Materials

For those not familiar with Gherkin, several people have written about it better than I could here. For example:

  • The source on GitHub: https://github.com/cucumber/cucumber/wiki/Gherkin
  • Cucumber.io: https://cucumber.io/docs/reference
  • Behat has a great description with solid examples: http://docs.behat.org/en/v2.5/guides/1.gherkin.html
  • And there is the cucumber book: https://media.pragprog.com/titles/hwcuc/gherkin.pdf

Gherkin and ePRO

We have completed several drafts of the requirements iterating with our partners. The requirements have grown from a half dozen high-level features to 20 pages of more detailed features and scenarios. We have found it useful to organize the features around several actors and the partner / data streams. This works well, given how the project is structured as well as how the ePRO intervention was conceptualized. These requirements now include details such as example data to help the team members. Gherkin’s structure was well suited for this kind of work and the organization by actor was definitely useful.

Gherkin: Business Readable? Business Writable?

I appreciated a post from Martin Fowler reflecting on whether domain specific languages should be business readable or business writable. In our project, so far, Gherkin has been used in a business readable manner. That is, I have been the primary author and I invited comments from the team. Business owners are providing feedback, questions, and changes, and noticing areas where more detail is needed. My role in our team is often a bridging role – I span clinical and information science – and that seems to be a good fit for me using a language like Gherkin.

Next Steps

This work will feed into some user centred design and testing cycles for ePRO that I am very much looking forward to starting.

Thoughts on IBM Watson

I recently attended a presentation by John Boyer, a Chief Architect on IBM’s Watson project. The presentation was about the Watson technology that IBM has developed and continues to develop. This was the second Watson presentation I have attended; the first was at CASCON 2014, where an IBM VP gave a keynote lecture. I must admit, after both presentations I walked away feeling rather disappointed.

I must preface the remainder of this post by stating that this represents my opinion based on the information I have gathered through these presentations and reading the news. I could be entirely wrong.

The feeling of disappointment stems from what I thought Watson was. Before attending these presentations I had watched Watson play Jeopardy and read multiple news stories about this fantastic new technology; it felt like Watson was a huge step forward that was going to meet all of our analytics needs for all of eternity and make me coffee every morning. I expected these presentations to dazzle and amaze me with new and exciting machine learning techniques or to provide some deep philosophical insight regarding AI. After all, IBM must have some “secret sauce” that makes this tech work, right?

In retrospect, it is not surprising that I was disappointed; I set myself up for it. Anytime someone attributes a “god-like” status to anything, technology or otherwise, they will inevitably be disappointed in some respect.

As far as I can tell, Watson is an aggregation of the numerous machine learning and data mining techniques developed over the last 50 or so years. Of course, IBM has improved and tuned these approaches. However, in the end they are relying on the same basic approaches that the rest of the machine learning and data mining community relies upon. These have been combined to support exactly one task: answering questions.

The entire purpose of Watson is to answer questions based on the (potentially vast) amount of information available. This focus has allowed IBM to concentrate their efforts; there are numerous tasks that Watson can’t, and never will be able to, do, simply because it wasn’t designed for them. Watson isn’t some fantastically brilliant AI using some revolutionary algorithm or technique; it is just a (really big) program, and an excellent example of what contemporary machine learning is capable of at scale.

Watson is, in my opinion, a feat of engineering. Given that it uses the current state of the art in machine learning, and not some IBM secret sauce, it is an exceptionally well executed machine learning tool. A tremendous amount of engineering effort has gone into bringing together different machine learning approaches and unifying them into a package that can be deployed in a breadth of application domains (health care, finance, business management, security, etc.). The complexity of a single domain, if not managed properly, is often enough to cripple most software projects; Watson takes this complexity in stride and provides value to its customers.

I think the most interesting thing about Watson isn’t Watson itself but its application to different domains. IBM has taken approaches that have existed primarily in academia and made them accessible, on a huge scale, to the world. The value of machine learning and data mining techniques increases as we increase the amount of data processed. Watson will allow us to process tremendous amounts of information, and to really understand the impact of machine learning in different application domains.

Finally, I think Watson provides us with the opportunity to understand the limitations of current machine learning techniques. What problems with these approaches will manifest at scale? At what point is machine learning no longer viable, how will we know when we reach that point, and what will we do after that?

In conclusion, I am disappointed that Watson isn’t some out-of-this-world piece of technology; I really expected to be blown away. Upon reflection, that appears to have been a naive expectation. Life lesson: “there is no silver bullet”. However, Watson is an exceptional tool that will change how we approach problems in numerous domains, but it is just that: a tool.

Deep Work

Over the past few weeks, I have read Cal Newport’s new book Deep Work.

As a tenure-track assistant professor I found certain parallels, and it was an interesting read. I think it has relevance to the work of our lab and to how and what we are trying to accomplish. I have read Cal’s previous book and quite enjoyed it, and so I looked forward to this one coming out in January 2016.

Cal’s basic premise in this book is that there are two types of work: Deep Work and Shallow Work.
Deep work comes about as you develop an ability to focus; it is a skill that is increasingly important and harder to develop naturally in this connected time. As he says, deep work is the kind of work that gets you tenure (or promotions), while not doing shallow work, he somewhat jokingly said, is what gets you fired. Thus we all need to strike a balance between the two. Here are more detailed descriptions of each:

Deep Work

The kind of work that Cal talks about as deep work comprises the complex tasks that require high skill and high focus. For him, that is theoretical computer science proofs. For me, that would be paper writing, study design, and the qualitative analysis and careful synthesis I like to do related to health information systems / health systems and user experience. This is the work where you might experience flow, and it is the type of work that crumbles with interruptions.

Shallow Work

Shallow work consists of the interruptions. They may be important and can often be urgent.
Classic shallow work includes things like quick emails, scheduling meetings, and often the meetings themselves.

Deep Work routines

I like how Cal describes a number of deep work patterns he has seen be effective, from the “every day routine” to going away, building a stone tower, and staying there for weeks every year to get the deep thinking done. I have used both (well, not a stone tower, but I went away to write my thesis: no clinic, no classes, nothing but 1000+ words a day, repeat until done).

My takeaways from the book.

I think it is worth reading, especially if you are a grad student looking to get chunks of work done on your thesis and papers.

My clinic work does not fit into deep vs. shallow very well. It is a form of deep work in that it requires focus, knowledge and years of training, but I do not get the luxury of focusing on one patient for 90+ minutes. Instead, I’m seeing 30+ patients in a day. It isn’t Cal’s classic deep work, but it is not shallow work either. It is a different rhythm for me. I think that clinic work has helped me with the general technique of focusing, but not with sustaining that focus on a single topic. For that I can look at some of my other practices. I found that I was already doing several things from the book to promote deep work.

  1. I have a weekly / daily routine with deep work rhythms.
  2. I ensure a good chunk of deep work time each day (usually first thing in the morning, before people are up in my house). I also block off chunks of deep work (often later in the week) and try to cluster meetings and clinical time in other parts of the week so I can do a bit of “mode shifting”.
  3. I have reduced many of the common attention sinks / alerts in my day (e.g. no buzz on my phone for every email).

I found I could do more – and that “more” for me meant focusing on my shallow work.

  1. I cluster meetings wherever I can (but I still have a lot of meetings).
  2. Email: While I have turned off automatic checking and the bells and whistles when email arrives, I do check more often (manually), and I am often triggered by the little icon showing how many messages there are to be processed (rarely at zero these days).
  3. Timing: Cal does not get into timing too much, but I know my best deep work is done early, and I will work more to ensure the mornings have 1-2 chunks of deep work before I get “too shallow” with calls, emails, etc.

My actions:

  1. Email notifications: I will move my mail app off my task bar and turn off the notification badge. That seems small, but now I cannot see the 28 emails pending; even though it wasn’t pinging me actively, I would find it impossible not to look as I moved between apps.
  2. Meeting my Email: I have fit email into my schedule whenever, thinking it small and squeezable. It’s true that it is, and that seems to work for days with meetings. HOWEVER, it can distract on days where I want to get into thinking rhythms. If it IS a habit to check email while, say, boiling the kettle, then I’ll be more likely to do that when I am in deep thought and boiling the kettle. Instead of getting a cup of green tea being a quiet movement away from a deep problem, it becomes a phase shift. By booking email slots in my day, I can be more conscious of those shifts.
  3. One of the things that has been on my “deep work list” (long before this book) is to take a reading holiday. That is, take work time to get away and take with me a set of books / papers that address a big issue and not let myself be distracted for a large chunk of time. Bill Gates was known to do this and I have been meaning to try this. That will be one of my actions – maybe not for a full week, but at least several days to start.

Formalizing the meaning of prescriptions

Medication prescriptions are an important intervention in the healthcare process. Computerized systems are increasingly used to enter and communicate prescriptions, so-called Computerized Provider Order Entry (CPOE) systems. Current CPOE systems use varying degrees of structure for entering and communicating prescriptions, ranging from free text to completely structured entry. The benefit of structured prescription entry is that computers are able to (partially) interpret prescriptions and check their validity and safety, for example with respect to the latest medical practice guidelines and potential adverse drug events (drug interactions, allergies, etc.).

Another recently emerging use case for computer-interpretable prescriptions is adherence monitoring and improvement technologies. Such technologies are coming on the market to provide caregivers with feedback about how well patients manage to follow their prescriptions, and to help patients increase their adherence to prescriptions. Adherence monitoring requires a formal, computer-interpretable model of the meaning of prescriptions. No such model exists to date. Our lab has conducted research on this topic and proposed a first approach to close that gap. We developed a formalization of prescriptions based on the definition of a graph transformation system. This was done in the context of an honours thesis by Simon Diemert, supervised by Morgan Price and Jens Weber. A paper on this approach has been accepted at the 8th Intl. Conf. on Graph Transformation (ICGT) and will be presented in July in L’Aquila.

How can Clinical Information Systems be certified?

Clinical Information Systems (CIS) such as Electronic Medical Records (EMRs) have become pervasive in modern health care. They have great potential for improving efficiency and outcomes. However, there is also significant published evidence about the risks posed by low-quality CIS solutions with respect to patient safety, security and privacy. As a result, stakeholders have called for quality certification and regulation of CIS, and indeed some efforts have been made in this direction. However, the emphasis on pre-market controls (traditionally used for medical devices) does not seem to fit these systems well. Many quality issues arise only through interactions of the CIS software with its specific deployment environment. Regulators such as the FDA and Health Canada have therefore started to shift focus to post-market controls. To some degree, user experience and incident reporting systems operated by regulators (such as the FDA’s MAUDE) serve this purpose. But anybody who has tried to analyze data from MAUDE for the purpose of quality surveillance and improvement will have noticed that the information in such systems is very hard to query and analyze. It is not really actionable.

Can we come up with a better way of performing “continuous certification” of CIS? 

It is this problem that Craig Kuziemsky and I have been discussing today at our paper presentation at FHIES/SEHC (hosted by the Software Engineering Institute). We developed a conceptual model for continuous certification and applied it to a case study. The framework is shown in the picture below. You can read about it in our paper.

[Figure: conceptual model for continuous certification of CIS]

Translating ideas into (health) software – the safe way

I was invited to give a presentation at the University of British Columbia as part of the Biomedical Engineering (BME) workshop this year. This year’s workshop theme was “translating ideas into practice”. Being a software engineer rather than a biomedical engineer, I chose to present on software engineering concerns, in particular safety and security concerns, related to biomedical software. Here are the slides for my presentation.