Category Archives: Quality

Debriefing on the Apple-FBI Debacle: The Aftermath

The Event

As many of you may have heard, in February the FBI requested that Apple modify its systems to give them access to an encrypted iPhone, a request that swiftly drew the ire of the security community. Many experts asked why the FBI would even make such a “ludicrous” and “short-sighted” request. They questioned the FBI’s understanding of basic encryption principles and quipped that the decision must have been made by a politician, since no security expert would have made such a request. They further pointed to the Snowden revelations and to how many government agencies have recently abused (and continue to abuse) the powers they have been given in this area.

Many worried that such a request would set a precedent, and even the FBI director admitted that it most likely would.

Apple publicly refused the request. This signaled the start of significant political posturing by both players to garner support for their cause. The security community and many industry leaders quickly sided with Apple.

Ultimately the FBI elected to contract a third party who used an unknown exploit to gain access to the device. Both parties ceased their posturing and stood down.

The Aftermath

Three months later we observe the resulting fallout of this debacle.

Before this event even happened, trust in the U.S. government was near an all-time low, and this event did little to instill further confidence.

Beyond this, some have claimed that Apple “won” the battle because it did not provide the requested modifications; the FBI found another way in that did not require a court decision. That conclusion may be premature.

What Apple did do was prevent (for the time being) the FBI from having easy, systemic access to any iPhone it wants. This is important, since the previous abuses of power stemmed from easy and systemic access to data (with or without warrants).

Prevention of such access requires the FBI to treat each case as its own separate issue:

  • Get the physical device
  • Get a warrant to gain access
  • Breach that specific device
  • Search for the evidence they need

The difference is in the difficulty and the case-by-case nature. It is not easy to carry out this process for 100,000 devices at once. And it should not be: that is beyond the scope of a single case and, some would say, borders on mass surveillance.

As with most things, there is no black-and-white. There is always a trade-off between privacy and security, and it is not always clear which path should be taken.

What must be considered is that each breach of privacy for the sake of security is a one-way transition. By breaching privacy through a court order or through legislation, a precedent is set, which makes subsequent and wider-scope breaches easier. Alas, there is an opportunity cost to our drive for total security. By resisting the FBI, Apple resisted the erosion of privacy at a systemic level. Access to the device was still gained, but not through systemic means.

Furthermore, this event has heightened the awareness of privacy in the general populace. This can only be taken as a good sign. One should take their privacy seriously and be critical of any institution that intends to infringe on it. Not only did individuals take measures to secure their privacy, but other institutions did too. WhatsApp decided to strengthen their encryption as a direct result of the Apple case.

Unfortunately, the lack of precedent also means that this particular conflict may arise again in the future. And that future may include some event that spurs the government to forgo long-term thinking and rationality for a short-term solution to its immediate problems… Because that hasn’t happened before.

All in all, the result is favourable to both parties. Apple is enshrined as an advocate for privacy and maintains its position without alienating its consumers, while the FBI was ultimately able to gain access to the device in question.

As for the future? No-one knows.

Comments on Software Design Processes

Repost from Simon’s Blog

Last week I attended a presentation by Brendan Murphy from Microsoft Research – Cambridge. Dr. Murphy presented on his research regarding software development processes at Microsoft. This post contains a summary of the presentation and my thoughts on the material presented.

The discussion focused on the impact of particular architectural and process decisions. For example, how does the choice of architecture affect the deployment of new source code? Consider a micro-service architecture in which several independent components work in concert; it is likely that each of these components has its own code base. Deployment of updates to one component must consider the effects on other components, i.e., a loss or change of particular functionality may cause problems in other components. Perhaps using a more monolithic architecture reduces this concern; however, software monoliths are known to have their own problems.
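
As a toy illustration of this concern (the services, dependencies, and version constraints below are invented for the example, not taken from the presentation), a deployment step for one component might first check whether its consumers can tolerate an interface change:

```python
# Hypothetical service catalogue for a micro-service system. `requires` lists,
# for each consumer, the interface version of each service it depends on.
# All names and versions are made up for illustration.
requires = {
    "frontend": {"billing": 2, "catalogue": 1},
    "billing":  {"catalogue": 1},
}

def broken_consumers(service, new_version):
    """Consumers whose declared dependency would no longer match if `service`
    were redeployed at `new_version`."""
    return [consumer for consumer, deps in requires.items()
            if service in deps and deps[service] != new_version]

print(broken_consumers("billing", 3))    # ['frontend'] -> update needs coordination
print(broken_consumers("catalogue", 1))  # []           -> safe to redeploy as-is
```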

The way that we, as software engineers and developers, manage variability in source code is through branching. Code repositories often consist of a number of related branches that stem from a “master” or main version of the code; a branch often contains a specific feature or refactoring effort that may eventually be merged back into the main branch. This concept is familiar to anyone who uses contemporary version control tools such as Git.

In his presentation, Dr. Murphy discussed a number of different branching models, some of which have been experimented with at Microsoft at scale. Different approaches have pros and cons and will support different architectures and deployment processes in different ways. Of course, when considering the effect of different models it is important to be able to measure properties of the model. Murphy presented several such properties: productivity (how often files are changed over time), velocity (how quickly changes/features move through the entire process), quality (the number of defects detected and corrected in the process), and coupling of artifacts (which artifacts often change together). These can be used to evaluate and compare different branching models in practice.
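
As a rough sketch of how such properties might be computed, the example below derives simplified versions of the four measures from a handful of toy commit records. The field names, values, and formulas are my own assumptions for illustration and not the definitions used in the research.

```python
from collections import Counter
from datetime import date
from itertools import combinations

# Toy commit records; all fields and values are invented for the example.
commits = [
    {"date": date(2016, 5, 2), "files": ["api.cs", "api_tests.cs"], "defect_fix": False, "days_to_merge": 3},
    {"date": date(2016, 5, 3), "files": ["api.cs", "client.cs"],    "defect_fix": True,  "days_to_merge": 1},
    {"date": date(2016, 5, 9), "files": ["client.cs"],              "defect_fix": False, "days_to_merge": 5},
]

def productivity(commits, period_days):
    """How often files are changed over time (file changes per day)."""
    return sum(len(c["files"]) for c in commits) / period_days

def velocity(commits):
    """How quickly changes move through the process (average days to merge)."""
    return sum(c["days_to_merge"] for c in commits) / len(commits)

def quality(commits):
    """Share of commits that are defect corrections."""
    return sum(c["defect_fix"] for c in commits) / len(commits)

def coupling(commits):
    """Which artifacts most often change together (pairs with co-change counts)."""
    pairs = Counter()
    for c in commits:
        pairs.update(combinations(sorted(c["files"]), 2))
    return pairs.most_common()

print(productivity(commits, period_days=7))  # ~0.71 file changes per day
print(velocity(commits))                     # 3.0 days on average
print(quality(commits))                      # ~0.33 of commits fix defects
print(coupling(commits))                     # file pairs with co-change counts
```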

The most interesting part of the presentation, from my point of view, was the discussion of current work that builds on these branching models and properties. In theory, a control loop can be used to model an evolving code repository: at every point in time the properties are measured and fed into a decision-making unit, which attempts to optimize the repository's branching structure based on the aforementioned properties. Murphy indicated that they are currently testing the concept at Microsoft Research using repositories for various Microsoft products.
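
A minimal sketch of that idea, as I understood it, might look like the loop below. The measurement and decision rules are placeholders of my own invention; the actual system presumably mines real repository data and uses a far more sophisticated optimization step.

```python
import random

def measure(repo):
    """Placeholder measurement step: in practice these properties would be
    mined from the repository (commits, work items, bug reports)."""
    return {"velocity_days": random.uniform(1, 10),
            "defect_ratio": random.uniform(0.0, 0.3)}

def decide(repo, metrics):
    """Placeholder decision unit: adjust the branching structure based on the
    measured properties. The rules here are invented for illustration."""
    if metrics["velocity_days"] > 5:       # changes move too slowly
        repo["feature_branches"] = max(1, repo["feature_branches"] - 1)
    if metrics["defect_ratio"] > 0.2:      # too many defect fixes
        repo["use_release_branch"] = True
    return repo

# The control loop: measure the properties, feed them to the decision unit,
# apply the adjusted branching structure, and repeat as the repository evolves.
repo = {"feature_branches": 4, "use_release_branch": False}
for _ in range(12):
    repo = decide(repo, measure(repo))
print(repo)
```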

The main takeaways from the presentation were:

  • Deployment processes must be planned for as much as, if not more than, the software itself.
  • Architecture will impact how you can manage software deployment.
  • Repository branching models have an effect on version management and deployment and need to be designed with care and evaluated based on reasonable metrics.
  • A control loop can theoretically be used to evolve a repository such that there is always an optimal branching model.

From my personal experience working as both a contractor and an employee, the point about planning for deployment is entirely correct. If software isn’t deployed or is inaccessible to its users, it is as if it doesn’t exist. Deployment must be considered at every step of development! The presentation did cause me to pause and consider the implications of architecture affecting the patterns of deployment and required maintenance. As we continue to move towards architectures that favour higher levels of abstraction (in my opinion, one of the primary concepts every software engineer should embrace), we will need to find ways to manage the increasing variability between the abstracted components.

The notion of a control loop to manage a branching model is quite interesting. It seems that we could, in theory, use such a method to optimize a repository’s branching model, but in practice the effects might be problematic. If an optimum can be found quickly, early on in a project, then this seems like a good idea. However, if weeks and months go by and the branching model is still evolving, developers may spend more time adapting to a continually changing model than leveraging the advantages of the supposed “optimum” model. On the other hand, a continually changing model may also force development teams to be more aware of their work, which has benefits of its own. Really, this idea needs to be tested at scale in multiple environments before we can say more; an interesting concept nonetheless.

Hazard analysis for Safety-critical Information Systems

There is a notable lack of maturity of prospective hazard analysis methodology for safety-critical information systems (such as clinical information systems). Most methods in this area target only the user interface and are limited to usability studies. We have been researching hazard analysis methods to fill this important gap and defined the ISHA (Information System Hazard Analysis) method, based on earlier foundational work in safety engineering. Today, Fieran Mason-Blakely is presenting our paper at FHIES/SEHC 2014. In this paper, we apply ISHA to the EMR-to-EMR episodical document exchange (E2E) standard developed in British Columbia (which is currently under deployment). Check out our paper for details.

Reform of the Food and Drugs Act also impacts Medical Devices

The proposed bill C-17 to modernize the Canadian Food and Drugs Act has received media attention mainly with respect to its implications for drug safety. It will provide the government with more powers, including the power to recall drugs from the market. However, bill C-17 also applies to medical devices, including software-based medical devices. The bill also puts in place a mandatory reporting requirement for adverse events involving drugs and medical devices. This is a step in the right direction. Now all we need is a budget to empower the government to enforce the new regulations.