Deep Work

Over the past few weeks, I have read Cal Newport’s new book Deep Work.

As a tenure-track assistant professor I found certain parallels, and it was an interesting read. I think it is relevant to the work of our lab and to what we are trying to accomplish. I read Cal's previous book and quite enjoyed it, so I looked forward to this one coming out in January 2016.

Cal’s basic premise in this book is that there are two types of work: Deep Work and Shallow Work.
Deep work comes about as you develop an ability to focus; it is a skill that is increasingly important, and increasingly hard to develop naturally, in this connected time. As he says, deep work is the kind of work that gets you tenure (or promotions), while neglecting shallow work, he somewhat jokingly adds, is what gets you fired. Thus we all need to strike a balance between the two. Here are more detailed descriptions of each:

Deep Work

The kind of work Cal describes as deep work comprises the complex tasks that require high skill and high focus. For him, that is theoretical computer science proofs. For me, that would be paper writing, study design, and the qualitative analysis and careful synthesis I like to do related to health information systems / health systems and user experience. This is the work where you might experience flow, and it is the type of work that crumbles under interruptions.

Shallow Work

Shallow work is the interruptions. It may be important and is often urgent.
Classic shallow work includes things like quick emails, scheduling meetings, and often the meetings themselves.

Deep Work routines

I like how Cal describes a number of deep work patterns he has seen be effective, from the everyday routine to going away, building a stone tower, and staying there for weeks every year to get the deep thinking done. I have used both (well, not a stone tower, but I did go away to write my thesis: no clinic, no classes, nothing but 1000+ words a day, repeat until done).

My takeaways from the book

I think it is worth reading, especially if you are a grad student looking to get chunks of work done on your thesis and papers.

My clinic work does not fit the deep vs. shallow split very well. It is a form of deep work in that it requires focus, knowledge, and years of training, but I do not get the luxury of focusing on one patient for 90+ minutes; instead, I'm seeing 30+ patients in a day. It isn't Cal's classic deep work, but it is not shallow work either. It is a different rhythm for me. I think that clinic work has helped me with the general technique of focusing, but not with sustaining that focus on a single topic. For that I can look at some of my other practices. I found that I was already doing several of the things in the book to promote deep work.

  1. I have a weekly / daily routine with deep work rhythms.
  2. I ensure a good chunk of deep work time each day (usually first thing in the morning, before people are up in my house). I also block off chunks of deep work (often later in the week) and try to cluster meetings and clinical time in other parts of the week so I can do a bit of “mode shifting”.
  3. I have reduced many of the common attention sinks / alerts in my day (e.g. no buzz on my phone for every email).

I found I could do more – and that “more” for me meant focusing on my shallow work.

  1. I cluster meetings wherever I can (but I still have a lot of meetings).
  2. Email: While I have turned off automatic checking and the bells and whistles when email arrives, I do check more often (manually), and I am often triggered by the little icon showing how many messages there are to be processed (rarely at zero these days).
  3. Timing: Cal does not get into timing much, but I know my best deep work is done early, and I will work to ensure the mornings have 1-2 chunks of deep work before I get “too shallow” with calls, emails, etc.

My actions:

  1. Email notifications: I have moved my mail app off my task bar and turned off the notification badge. That seems small, but now I cannot see the 28 emails pending – even though the app wasn’t pinging me actively, I found it impossible not to look as I moved between apps.
  2. Meeting my Email: I have fit email into my schedule wherever it lands, thinking of it as small and squeezable. It is, and that seems to work on days full of meetings. HOWEVER, it can distract on days where I want to get into thinking rhythms. If it IS a habit to check email while, say, boiling the kettle, then I am more likely to do that when I am in deep thought and boiling the kettle. Instead of getting a cup of green tea being a quiet step away from a deep problem, it becomes a phase shift. By booking email slots into my day, I can be more conscious of those shifts.
  3. One of the things that has been on my “deep work list” (long before this book) is to take a reading holiday: take work time to get away with a set of books / papers that address a big issue, and not let myself be distracted for a large chunk of time. Bill Gates was known to do this, and I have been meaning to try it. That will be one of my actions – maybe not a full week, but at least several days to start.

Comments on Software Design Processes

Repost from Simon’s Blog

Last week I attended a presentation by Brendan Murphy from Microsoft Research – Cambridge. Dr. Murphy presented on his research regarding software development processes at Microsoft. This post contains a summary of the presentation and my thoughts on the material presented.

The discussion focused on the impact of particular architectural and process decisions. For example, how does the choice of architecture affect the deployment of new source code? Consider a micro-service architecture in which several independent components work in concert; it is likely that each of these components has a unique code base. Deployment of updates to one component must consider the effects on other components, i.e. a loss or change of a particular functionality may cause problems in other components. Perhaps a more monolithic architecture reduces this concern; however, software monoliths are known to have problems of their own.

The way that we, as software engineers and developers, manage variability in source code is through branching. Code repositories often consist of a number of related branches that stem from a “master” or main version of the code; branches often contain a specific feature or refactoring effort that may eventually be merged back into the main branch. This concept is familiar to anyone who uses contemporary version control tools such as Git.

In his presentation, Dr. Murphy discussed a number of different branching models, some of which have been experimented with at Microsoft at scale. Different approaches have pros and cons and will support different architectures and deployments in different ways. Of course, when considering the effect of different models it is important to be able to measure properties of the model. Murphy presented several properties: productivity (how often files are changed over time), velocity (how quickly changes/features move through the entire process), quality (the number of defects detected and corrected in the process), and coupling of artifacts (which artifacts often change together). These can be used to evaluate and compare different branching models in practice.
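As a rough illustration of how those four properties might be computed from a commit log (this sketch is my own; the record format and function names are invented, not Microsoft's actual tooling):

```python
from collections import Counter
from itertools import combinations

def branch_metrics(commits):
    """Compute rough versions of the four properties from a commit log.

    Each commit is a tuple: (timestamp_in_days, files_changed, defects_fixed).
    """
    files = [f for _, fs, _ in commits for f in fs]
    productivity = len(files) / max(1, len(commits))   # file changes per commit
    span = commits[-1][0] - commits[0][0]
    velocity = len(commits) / max(1, span)             # commits per day
    quality = sum(d for _, _, d in commits)            # defects fixed in-process
    coupling = Counter()                               # files that change together
    for _, fs, _ in commits:
        for pair in combinations(sorted(fs), 2):
            coupling[pair] += 1
    return productivity, velocity, quality, coupling

# Toy commit history: two commits touch a.c and b.c together.
commits = [
    (0, ["a.c", "b.c"], 0),
    (2, ["a.c", "b.c"], 1),
    (4, ["c.c"], 0),
]
p, v, q, c = branch_metrics(commits)
print(p, v, q, c.most_common(1))
```

Real measurement would of course be far more nuanced (normalizing by team size, distinguishing branch types, and so on), but even this toy version surfaces the coupled pair (a.c, b.c).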

The most interesting part of the presentation, from my point of view, was the discussion of current work based on branching models and these properties. In theory, a control loop can be used to model an evolving code repository. At every point in time the properties are measured and then fed into a decision-making unit, which attempts to optimize the branching structure based on the aforementioned properties. Murphy indicated that they are currently testing the concept at Microsoft Research using repositories for various Microsoft products.
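In spirit, that control loop might look like the sketch below. The measurement and decision functions here are placeholders I made up to show the measure-decide-act cycle, not anything Murphy described in detail:

```python
def control_loop(repo, measure, decide, steps=10):
    """Evolve a repository's branching structure via measure -> decide -> act.

    `measure` returns a dict of properties for the current structure;
    `decide` proposes a new structure, or None to keep the current one.
    """
    history = []
    for _ in range(steps):
        props = measure(repo)          # productivity, velocity, quality, coupling
        history.append(props)
        change = decide(props, repo)   # optimizer proposes a branching change
        if change is None:             # reached a (local) optimum: stop adapting
            break
        repo = change
    return repo, history

# Toy example: "repo" is just a branch count, and the optimizer shrinks
# toward 3 branches, where the measured "velocity" peaks.
measure = lambda branches: {"velocity": 1.0 / (1 + abs(branches - 3))}
decide = lambda props, branches: branches - 1 if branches > 3 else None

repo, history = control_loop(5, measure, decide)
print(repo, len(history))
```

The interesting engineering questions, as the takeaways below suggest, are in what `measure` and `decide` should actually be, and how quickly the loop should be allowed to change the structure developers work against.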

The main takeaways from the presentation were:

  • Deployment processes must be planned for as much as, if not more than, the software itself.
  • Architecture will impact how you can manage software deployment.
  • Repository branching models have an effect on version management and deployment and need to be designed with care and evaluated based on reasonable metrics.
  • A control loop can theoretically be used to evolve a repository such that there is always an optimal branching model.

From my personal experience working as both a contractor and an employee, the point about planning for deployment is entirely correct. If software isn’t deployed, or is inaccessible to its users, it might as well not exist. Deployment must be considered at every step of development! The presentation did cause me to pause and consider the implications of architecture affecting the patterns of deployment and required maintenance. As we continue to move towards architectures that favour higher levels of abstraction (in my opinion, one of the primary concepts every software engineer should embrace), we will need to find ways to manage the increasing variability between the abstracted components.

The notion of a control loop to manage a branching model is quite interesting. It seems that we could, in theory, use such a method to optimize a repository’s branching model, but in practice the effects might be problematic. If an optimum can be found quickly, early in a project, then this seems like a good idea. However, if weeks and months go by and the branching model is still evolving, developers may spend more time adapting to a continually changing model than leveraging the advantages of the supposed “optimum” model. On the other hand, a continually changing model may also force development teams to be more aware of their work, which has benefits of its own. This idea really needs to be tested at scale in multiple environments before we can say more; an interesting concept nonetheless.

A Note About CSS Inherit

For those of you who have never worked with web technologies, allow me to say one thing: it’s the wild west out here. The novice web programmer must navigate a myriad of new hazards when coming from comfortable homes like C and Java. Just a few of the dangers: teetering technology stacks, silent failures, multiple machines logging vital debugging information in obscure locations, and niche technologies that will solve your problem today at the cost of a maintainability nightmare in the future.

Now, I may not be able to call myself a web dev novice anymore, but as I begin to see the edges of the big picture I am amazed at the total lack of consistency in implementation and documentation. While working on my current project I’ve examined quite a few different technologies and paradigms, but today’s issue comes back to one we thought we could trust: CSS.

CSS, or Cascading Style Sheets, is used primarily for setting the visual style of a markup language like HTML. It allows manipulation of a set of properties such as height, width, line colour, text alignment, and many more. CSS was developed by the World Wide Web Consortium (W3C), a body which maintains international standards for the web.

In recent years w3schools has become an invaluable resource for new web devs starting to investigate the tools they use. W3schools is in no way affiliated with the W3C but has become the default landing place for newcomers, with its resource pages and easily digestible code snippets. When searching CSS usage and keywords it’s hard not to end up on w3schools; the Mozilla Developer Network (MDN), by contrast, is considered a more advanced resource for experienced developers. I mention these organizations because they are the leading providers of CSS documentation, with the exception of the difficult-to-understand specification offered directly by the W3C.

Now that we understand what CSS is and where to go to find details let’s consider the following example code with no styling added:

<div class='container'>
    <div class='row'>
        <div class='col-md-6'>
            <button type='button'></button>
        </div>
    </div>
</div>

The scenario is fairly straightforward: there is a specific control on the page, with id Dataview, which will contain the data to be displayed. Into the Dataview DOM object we will add the above code: a container div to encapsulate everything, and a number of row divs, each containing a number of column divs (for the sake of this example we’ll be building a 2×2 grid), with a datum in each grid cell. Some of you may recognize the classes as belonging to the Twitter Bootstrap framework (or similar frameworks like Skeleton), but that has no impact on this example. During development it became apparent that navigation controls would be required. Our task today is to turn one of the grid cells into a control panel, starting with a back button.

We can accomplish this by adding some simple style properties to the HTML:

<div class='container' style='height:inherit; width:inherit;'>
    <div class='row' style='height:50%; width:inherit;'>
        <div class='col-md-6' style='height:inherit; width:50%;'>
            <button type='button' style='height:inherit; width:inherit;'>Back</button>
        </div>
    </div>
</div>

As you can see, we’ve used the height and width styles and either indicated a value relative to the parent or inherited directly from the parent element. This may seem almost too easy, but the nuance of inherit is what brought us here today. First, let’s review the documentation. w3schools has little to say about inherit: “The inherit keyword specifies that a property should inherit its value from its parent element.”[1]. Still curious, I looked around a little more and found that the MDN had slightly more precise information: “The inherit CSS-value causes the element for which it is specified to take the computed value of the property from its parent element.”[2]. The key word here is “computed”; let’s break down the above snippet in detail to understand why.

Remember, this code is a snippet of a 2×2 data grid. The top container inherits the size of whatever contains it, meaning that the “container” div is the full size of the parent element. The “row” div should be the full width of the parent but only half the height, to leave room for the second (not shown) row. The “col” div should be the same height as the row, with the width divided across all columns, once again half. Lastly, the button should fill the column that contains it. That is NOT what the above snippet does. The above results in columns that are only 25% of the height of the total container div, and a button that is far too small (a height of 12.5% and a width of 25% of the total container). Why? Because inherit does not inherit the computed value but instead the literal style property.
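The arithmetic behind those numbers is easy to check. If each inherited percentage is re-applied against the parent's size (the behaviour observed above), the fractions simply multiply down the chain; this little sketch (my own illustration, not browser code) works it out:

```python
def resolve(percent_chain, base=1.0):
    """Multiply out a chain of percentage heights, each taken
    relative to its parent, as a fraction of the outer container."""
    size = base
    for p in percent_chain:
        size *= p / 100
    return size

# container (100%) -> row (50%) -> col (inherits the row's literal 50%)
col_height = resolve([100, 50, 50])          # 0.25 of the container
# ...and the button inherits the col's literal 50% on top of that
button_height = resolve([100, 50, 50, 50])   # 0.125 of the container
print(col_height, button_height)
```

Which matches the observed result: columns at 25% of the container's height and a button at 12.5%.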

Let’s take another look at that snippet with the inherit styles replaced by the literal styles they inherit.

<div class='container' style='height:inherit; width:inherit;'>
    <div class='row' style='height:50%; width:inherit;'>
        <div class='col-md-6' style='height:50%; width:50%;'>
            <button type='button' style='height:50%; width:50%;'>Back</button>
        </div>
    </div>
</div>

With this we can see that the column will have a height of a quarter of the container, and the button a height of half of that. Clearly this isn’t what we set out to do. And while the w3schools definition remains sufficiently vague to not really address this problem, the MDN definition seems completely backwards in its claim about using the computed value. So far I haven’t been able to discover exactly why this happens. I was lucky in that my framework was still very simple when I encountered the issue, and it happened with nice, easy-to-visualize values. This may just be typical of the web development process, though: so much to learn and so little to rely on.

The solution is to be explicit about the relative size of each element.

<div class='container' style='height:inherit; width:inherit;'>
    <div class='row' style='height:50%; width:inherit;'>
        <div class='col-md-6' style='height:100%; width:50%;'>
            <button type='button' style='height:100%; width:100%;'>Back</button>
        </div>
    </div>
</div>

Now that we’ve replaced the inherit property with an explicit 100% of the parent property, everything displays the way it was meant to. The column div is the same height as the row div (which is in turn half of the container), and the button is the full size of the column.

Web technology allows us to explore amazing ways to transport and display our data but the reliable documents can be few and far between. It’s more important now than ever before for developers to be not only technically capable but also creative and resourceful because in the world of web dev sometimes the only solutions that exist are the ones you make.

Further Reading:

[1] W3Schools CSS inherit usage and def:
[2] MDN CSS inherit document:

Formalizing the meaning of prescriptions

Medication prescriptions are an important intervention in the healthcare process. Computerized systems, so-called Computerized Provider Order Entry (CPOE) systems, are increasingly used to enter and communicate prescriptions. Current CPOE systems use a varying degree of structure for entering and communicating prescriptions, ranging from free text to completely structured entry. The benefit of structured prescription entry is that computers are able to (partially) interpret prescriptions and check their validity and safety, for example with respect to the latest medical practice guidelines and potential adverse drug events (drug interactions, allergies, etc.).

Another recently emerging use case for computer-interpretable prescriptions is adherence monitoring and improvement technology. Such technologies are coming on the market to provide caregivers with feedback about how well patients manage to follow their prescriptions, and to help patients increase their adherence. Adherence monitoring requires a formal, computer-interpretable model of the meaning of prescriptions. No such model exists to date. Our lab has conducted research on this topic and proposed a first approach to close that gap: a formalization of prescriptions based on the definition of a graph transformation system. This was done in the context of an honours thesis by Simon Diemert, supervised by Morgan Price and Jens Weber. A paper on this approach has been accepted to the 8th Intl. Conf. on Graph Transformations (ICGT) and will be presented in July in L’Aquila.
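To give a flavour of the graph-transformation idea (this tiny sketch is my own illustration only, not the formalization from the paper): a prescription's state can be modelled as a graph, and an event such as "taking a dose" as a rule that rewrites that graph when its left-hand side matches.

```python
class PrescriptionGraph:
    """Illustrative only: a prescription state as a tiny attributed graph."""

    def __init__(self, doses_remaining):
        # nodes: the patient and the medication;
        # one edge between them carries the remaining-dose count
        self.nodes = {"patient", "medication"}
        self.edges = {("patient", "medication"): doses_remaining}

    def take_dose(self):
        """Rewrite rule: decrement the remaining-dose edge if it matches."""
        key = ("patient", "medication")
        if self.edges.get(key, 0) > 0:   # left-hand side of the rule matches
            self.edges[key] -= 1         # apply the transformation
            return True
        return False                     # rule not applicable: no doses left

g = PrescriptionGraph(doses_remaining=2)
taken = [g.take_dose(), g.take_dose(), g.take_dose()]
print(taken)
```

The appeal of the graph-transformation framing is exactly this applicability check: an adherence monitor can ask whether an observed dose event corresponds to an applicable rule in the formal model of the prescription.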

Hazard analysis for Safety-critical Information Systems

There is a notable lack of mature prospective hazard analysis methodology for safety-critical information systems (such as clinical information systems). Most methods in this area target only the user interface and are limited to usability studies. We have been researching hazard analysis methods to fill this important gap and have defined the ISHA (Information System Hazard Analysis) method, based on earlier foundational work in safety engineering. Today, Fieran Mason-Blakely is presenting our paper at FHIES/SEHC 2014. In the paper, we apply ISHA to the EMR-to-EMR episodical document exchange (E2E) standard developed in British Columbia (currently under deployment). Check out our paper for details.


How can Clinical Information Systems be certified?

Clinical Information Systems (CIS) such as Electronic Medical Records (EMRs) have become pervasive in modern health care. They have great potential for improving efficiency and outcomes. However, there is also significant published evidence about the risks posed by low-quality CIS solutions with respect to patient safety, security, and privacy. As a result, stakeholders have called for quality certification and regulation of CIS, and indeed some efforts have been made in this direction. However, the emphasis on pre-market controls (traditionally used for medical devices) does not seem to fit these systems well. Many quality issues arise only from interactions of the CIS software with its specific deployment environment. Regulators such as the FDA and Health Canada have therefore started to shift focus to post-market controls. To some degree, user experience and incident reporting systems operated by regulators (such as the FDA’s MAUDE) serve this purpose. But anybody who has tried to analyze data from MAUDE for the purpose of quality surveillance and improvement will have noticed that the information in such systems is very hard to query and analyze. It is not really actionable.

Can we come up with a better way of performing “continuous certification” of CIS? 

It is this problem that Craig Kuziemsky and I have been discussing today at our paper presentation at FHIES/SEHC (hosted by the Software Engineering Institute). We developed a conceptual model for continuous certification and applied it to a case study. The framework is shown in the picture below. You can read about it in our paper.


Reform of Food and Drugs Act also impacts Medical Devices

The proposed Bill C-17 to modernize the Canadian Food and Drugs Act has received media attention mainly with respect to its implications for drug safety. It will give the government more powers, including the power to recall drugs from the market. However, Bill C-17 also applies to medical devices, including software-based medical devices. The bill also puts in place a mandatory requirement to report adverse events involving drugs and medical devices. This is a step in the right direction. Now all we need is a budget to empower the government to enforce the new regulation.

Bidirectional Transformations (BX)

Bidirectional Transformations (BX) are a specific type of transformation of particular interest for many applications in software and information system engineering. This winter I co-organized a one-week seminar on BX theory and applications at the Banff International Research Station (BIRS). BIRS was an excellent venue and the seminar was quite worthwhile, as it brought leading researchers from different communities together to exchange their ideas and theories (despite arctic temperatures of -20 to -40 C). A report on the seminar is now published on the BIRS website.

The next BX workshop will be coming up in Athens as part of the EDBT/ICDT joint conference, where I will be co-presenting a paper on the application of BX in support of information system reengineering.