Saturday, June 18, 2016

TDD of a React-Redux connector

TLDR:


React is a library for building reusable UI components. Redux is a library for storing and managing changes to web application state. Wouldn't it be nice to use them together?

You might want to read a couple of my previous blog posts about react and redux first.

Design

We used the "react-redux" connector. This connector lets you wrap redux state and dispatch methods around a react component.

Test Strategy

I contemplated a few different ways to test react and redux together. The two that seemed the most interesting were:
  • Perform a "shallow render" of the component and check that the appropriate parameters are passed to the login form component.
  • Perform a full render of the component, trigger the submit button, and check that the store is appropriately modified.
While the first of those two is conceptually cleaner, I went with the second one due to context injection. By this stage I'd moved the loginService object to be passed as part of the react context instead of as a parameter - passing the loginService as a parameter was starting to seem ugly. To inject the right context for the tests, I had to wrap the redux login form inside a context injector component. I also had to wrap it in a "Provider" component from the react-redux library, which provides the store to the components under it in the component tree.

As a side effect, the redux wrapper was not rendering as part of a shallow render. Sigh. 
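The ContextContainer component isn't shown in this post, so here is a minimal sketch of what it might look like, assuming React's (legacy) context API - the real implementation may differ:

import React from "react";

// Makes the loginService available, via the react context, to every
// component below this one in the tree.
class ContextContainer extends React.Component {
  getChildContext() {
    return {loginService: this.props.loginService};
  }

  render() {
    return React.Children.only(this.props.children);
  }
}

ContextContainer.childContextTypes = {
  loginService: React.PropTypes.object
};

export default ContextContainer;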

So here are my imports and test setup method:

import {mount} from 'enzyme';
import ContextContainer from "./ContextContainer.jsx";
import LoginService from "../../src/services/LoginService.jsx";
import React from 'react';
import ReduxLoginForm from "../../src/components/ReduxLoginForm.jsx";
import {Provider} from "react-redux";
import {createStore} from "redux";
import reducers from "../../src/model/combinedReducers.jsx";


var loginService;
var store;
var loginForm;

beforeEach(() => {
  loginService = new LoginService();
  store = createStore(reducers);

  loginForm = mount(
    <Provider store={store}>
      <ContextContainer loginService={loginService}>
        <ReduxLoginForm />
      </ContextContainer>
    </Provider>
  );
});

Tests

There was only one test for this component - that the form dispatches an action to the store when the login is successful. Note the way I'm mocking a successful promise - I found that using an actual promise object caused issues related (I assume) to asynchronous behaviour. I'm also not happy with the way I'm invoking the form submission - it relies on knowledge about the login form's internals, and this test is not for the login form.

it("generates a dispatch to the store on successful login", () => {
  var successPromise = { then: (s,r) => s("test token")};
  spyOn(loginService, "login").and.returnValue(successPromise);

  expect(store.getState().authentication.get("loggedIn"))
    .toEqual(false);

  loginForm.find("button.submitLogin").simulate("click");

  expect(store.getState().authentication.get("loggedIn"))
    .toEqual(true);

  expect(store.getState().authentication.get("token"))
    .toEqual("test token");
});

Code

In case you are interested, here is the code for the ReduxLoginForm:

import LoginForm from "./LoginForm.jsx";
import {connect} from "react-redux";
import {createLoginAction} from "../model/authenticationReducer.jsx";

var dispatchMap = dispatch => {
  return {
    onLogin: (username, password) => {
      dispatch(createLoginAction(username, password));
    }
  };
};

var propsMap = () => {
  var props = {
  };
  return props;
};

var ReduxLoginForm = connect(propsMap, dispatchMap)(LoginForm);

export default ReduxLoginForm;


Test Driven Development of a Redux Store

TLDR:

Redux is a JavaScript library for storing and managing the state of an application. It has 3 important concepts:
  • The state is the current snapshot of the data.
  • A store contains the current state.
  • A reducer takes an event (an "action", in Redux terminology) and produces a new snapshot of the data.
Externally, Redux is based on the "Observable" pattern (the "M" of MVC) where interested objects register to be notified when the store changes state. Internally, the store holds a current state that never changes, and it provides no setters on that state.

Instead, to change the state you pass an event to the redux store (like "item deleted"). The store then creates an entirely new state by invoking all the reducers, and notifies the observers.

A really useful thing from a TDD perspective is that a store can have multiple reducers. Each reducer is responsible for managing an isolated part of the state - and events are sent to all reducers. This is useful because we can test each reducer independently.
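To make that external contract concrete, here is a tiny sketch using a throwaway "counter" reducer (my illustration - it's not part of the login application):

import {createStore} from "redux";

// A reducer: take the current state and an action, return the new state.
var counter = (state = 0, action) =>
  action.type === "INCREMENT" ? state + 1 : state;

var store = createStore(counter);

// Observers register with subscribe() and are notified on every dispatch.
store.subscribe(() => console.log("state is now", store.getState()));

store.dispatch({type: "INCREMENT"}); // logs "state is now 1"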

The Tests

To continue the example from the React Login Page, I decided to first build a simple store for authentication data. It needed to store a boolean "loggedIn" flag and an authentication token. After a bit of thought, I decided on the following tests:
  • Default values of the "loggedIn" flag and the token
  • Processing a "loggedIn" event should set the flag and the token
  • Processing a "loggedOut" event should clear the flag and remove the token
  • The state is immutable
(note that each of those is actually two tests).

For those who are interested, my imports are:
import { createStore } from 'redux';
import reducer, {createLoginAction, createLogoutAction} from "../../src/model/authenticationReducer.jsx";

The rationale behind those imports is:
  • I want to export the reducer by default - not a created store.
    • This lets me combine the reducer into a bigger store elsewhere in my application (see the sketch after this list).
    • It also means that I have to create the store in my app, so the module can't return a hacked store - it has to return a proper reducer.
    • Finally, it means I don't have to test that the store being returned does the event notification properly.
  • I felt that the methods to create the login and logout actions belonged in the same file as the reducer.
    • That way I can have private constants for the events, and all the logic is in the same place.
    • It also means my tests don't need to know the internals of the events.
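Here is a sketch of what that combination might look like - a minimal combinedReducers.jsx, inferred from the way the previous post's test reads state via store.getState().authentication:

import {combineReducers} from "redux";
import authentication from "./authenticationReducer.jsx";

// Each reducer manages its own named slice of the overall state.
export default combineReducers({
  authentication: authentication
});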
In addition, I used the JavaScript "Immutable" library (Immutable.js), as it makes copying an entire state easy.
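For example, a quick sketch of the Immutable.js behaviour the reducer relies on:

import Immutable from "immutable";

var state = new Immutable.Map({loggedIn: false});

// set() never mutates - it returns a brand new map.
var newState = state.set("loggedIn", true);

console.log(state.get("loggedIn"));    // false - the original is untouched
console.log(newState.get("loggedIn")); // true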

Default Values

This test is reasonably trivial - but important. By default the user should be logged out.

it("should be logged out by default", () => {
  var store = createStore(reducer);
  expect(store.getState().get("loggedIn")).toEqual(false);
  expect(store.getState().get("token")).toBeUndefined();
});

Logging In

Again, this is a reasonably trivial test.

it("should update the logged in flag and the token on an incoming 'login' event", () => {
  var store = createStore(reducer);
  store.dispatch(createLoginAction("12345"));

  expect(store.getState().get("loggedIn")).toEqual(true);
  expect(store.getState().get("token")).toEqual("12345");
});

Logging Out

Logging out was marginally more interesting because I had to get the store into a logged in state first. While I don't like calling the login action creator - because it means that a broken login could break this test - it seemed the cleanest way to achieve what I needed.

it("should update the logged in flag and the token on an incoming 'logout' event", () => {
  var store = createStore(reducer);

  // have to log in first!
  store.dispatch(createLoginAction("12345"));

  store.dispatch(createLogoutAction());

  expect(store.getState().get("loggedIn")).toEqual(false);
  expect(store.getState().get("token")).toBeUndefined();
});

Immutability

One of the principles of Redux is that the state itself does not change - instead the reducer produces a new, modified state. To test this, I adapted the logout test to keep a reference to the state post-login, then checked that that state had not changed after the logout.

it("should not modify the previous state", () => {

  // have to log in first!
  store.dispatch(createLoginAction("12345"));
  var loggedInState = store.getState();

  store.dispatch(createLogoutAction());

  expect(store.getState().get("loggedIn")).toEqual(false);
  expect(store.getState().get("token")).toBeUndefined();
      
  expect(loggedInState.get("loggedIn")).toEqual(true);
  expect(loggedInState.get("token")).toEqual("12345");
});

The Code

In case you are interested, here is the code for the login reducer:

import Immutable from "immutable";

const defaultState = new Immutable.Map({
  loggedIn: false
});

const LOGIN_ACTION = "authentication.LOGGED_IN";
const LOGOUT_ACTION = "authentication.LOGGED_OUT";

export default (state = defaultState, action) => {
  switch (action.type) {
    case LOGIN_ACTION:
      return state.set("loggedIn", true).set("token", action.token);

    case LOGOUT_ACTION:
      return state.set("loggedIn", false).set("token", undefined);

    default:
      // console.log("Ignoring action: " + action.type);
      return state;
  }
};

export const createLoginAction = token => {
  return {
    type: LOGIN_ACTION,
    token: token
  };
};

export const createLogoutAction = () => {
  return {
    type: LOGOUT_ACTION
  };
};



Friday, June 17, 2016

TDD of a react component


TLDR: 

Building a react component using TDD forces you to focus on the component's interface - and you end up with a cleaner, better defined component. However, the JavaScript space is so complex that figuring out how to write the tests is a project in itself. This blog post explains bits of what you need.

The Tests

Recently at work, I've had to learn the React framework (as well as Redux and a few others). After a bit of googling, the sensible place to start seemed to be a simple login component. The role of this component is to show two fields - username and password - and submit the data to a loginService (I come from a back end development perspective and have developed a habit of wrapping external integration points in a service).

The tests I decided on were:

  • The component has a username field
  • The component has a password field
  • The component has a submit button
  • The component calls the login service's login method when submit is pressed
  • The component passes the value of username to the login service
  • The component passes the value of password to the login service
  • If login is successful then the component redirects to "/app"
  • If the login is not successful then the component displays an error.
These tests can, broadly, be broken down into three categories: structural, interaction start, and interaction end.

Test Setup

For those who are interested, here are my imports:
import React from 'react';
import LoginForm from '../../src/components/LoginForm.jsx';
import {mount} from 'enzyme';
import LoginService from "../../src/services/LoginService.jsx";

I use Karma/Jasmine as the test runner and, in addition, have webpack set up to produce a deployable bundle that includes the tests, so I can run the tests in a user's web browser if they are having issues. That tells me which tests are failing in their browser with their set of plugins, configuration, etc, etc.
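For reference, here is a rough sketch of what the relevant bits of my karma.conf.js might look like - the paths, plugins, and browsers here are assumptions and will differ per project:

// karma.conf.js - Karma runs the Jasmine tests, with webpack bundling
// the JSX before the tests execute (requires the karma-webpack plugin).
module.exports = function (config) {
  config.set({
    frameworks: ["jasmine"],
    files: ["test/**/*Test.jsx"],
    preprocessors: {
      "test/**/*Test.jsx": ["webpack"]
    },
    webpack: require("./webpack.config.js"),
    browsers: ["PhantomJS"]
  });
};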

Structural

My initial approach to writing the structural test was:
  • Render the component using ReactTestUtils
  • Extract the DOM component using react-dom
  • Wrap the DOM in a jQuery object
  • Find the appropriate field and assert there is one of it.
However, after writing the test that way, I found out about the enzyme library. Enzyme, essentially, wraps the first 3 steps into 1 call.

For field tagging, I decided to use the "class" attribute. I could have used the "id" attribute instead, but class made sense at the time.

My first test looked something like this:
it("Has a username field", () => {
  // setup
  var loginForm = mount(<LoginForm/>);

  // test and asserts
  expect(loginForm.find("input.username").length).toEqual(1);
});

Interaction Start

Here I quickly found that setting the value of a field in React/Enzyme is easy. Well, it's easy after an hour or so of using google. Note that I haven't written the loginService yet, so I'm just creating a mock object - which I'd do anyway, because otherwise this test would not be self-contained. Note that I eventually refactored this code to pass the LoginService object as part of the React context. That ended up being a nicer way to inject global singletons as dependencies - passing all of them pervasively through the react tree was going to be a lot of overhead for little gain.
it("Passes the username to the defined login method", () => {
  // setup
  var successPromise = { then: (s,r) => s("test token")};
  var loginService = new LoginService();
  spyOn(loginService, "login").and.returnValue(successPromise);
  var loginForm = mount(<LoginForm loginService={loginService}/>);
  var usernameField = loginForm.find("input.username");
  usernameField.simulate("change", {target: {value: "username"}});

  // test
  loginForm.find("button.submitLogin").simulate("click");

  // asserts
  expect(loginService.login).toHaveBeenCalledWith("username", jasmine.anything());
});
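As an aside, the context refactor mentioned above is small on the component side. Here's a sketch of what it might look like, assuming React's legacy context API (my guess at the shape - the real code may differ):

// In LoginForm.jsx - declare interest in the loginService context entry.
// Without contextTypes, React won't pass the context through.
LoginForm.contextTypes = {
  loginService: React.PropTypes.object
};

// The component then reads this.context.loginService instead of
// this.props.loginService when the submit button is clicked.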

Interaction End

After some thought about this test, I came to the conclusion that the login form should not be responsible for deciding what to do after the login succeeds. So I added an "onLogin" function parameter and verified that it was called.

it("Passes the onLogin method to the 'then' promise returned by the login service", (done) => {
  // asserts
  var loginSuccess = token => {
    expect(token).toEqual("test token");
    done();
  };

  // setup
  var successPromise = { then: (s,r) => s("test token")};
  var loginService = new LoginService();
  spyOn(loginService, "login").and.returnValue(successPromise);
  var loginForm = mount(
    <LoginForm onLogin={loginSuccess} loginService={loginService}/>
  );

  // test
  loginForm.find("button.submitLogin").simulate("click");
});


Friday, February 28, 2014

Governance - Strategy as a key Constraint on Delivery


I often hear the terms "we need better governance of project XXX" or "they don't understand the value of good governance" from project managers, members of the PMO (Project Management Office), and the like. While it's easy to assume that the people saying such things are striving for administrative excellence as opposed to technical excellence (thanks to Jim Highsmith for that wording), I do believe there is a role for good governance. The real question is: what the hell is governance?

Somewhat unusually for a member of a software development team, I also have experience on the boards of a couple of different organisations - ranging from an alcoholic beverages company to a non-profit organisation. This makes my take on governance slightly different to the usual "the project is meeting cost, time, and quality constraints".

Imagine, for a second, that you are responsible to the shareholders for the performance of a company, and the shareholders (or the market analysts) ask you how the company is performing. If you said "we're on track to meet cost, quality, and time constraints" you'd quite rightly be fired. It's a bad way to govern an organisation - and an equally bad way to govern projects.

Governance, for me anyway, is about defining the minimum set of constraints so that the organisation can deliver value. However, that just raises two different questions: what are constraints, and what is value? Value is easy to define - it's usually money, except in the non-profit sector (better called the "not for loss" sector) where it is some non-financial outcome. Value is generally pretty well defined by the organisation - and if value is not well defined then the organisation's governing body knows where to focus first!

From a constraint perspective, I can think of four types of constraints:

  1. Strategy, 
  2. Risk Mitigation, 
  3. Cost, and 
  4. Timeframe. 

It might seem slightly odd that strategy is a constraint - but it makes sense (to me anyway). Richard Rumelt wrote in "Good Strategy Bad Strategy" that the kernel of a strategy has three things. First, a description of what is going on in the organisation's environment. Second, a set of guiding principles for the organisation to track a path through that environment. Third, a set of coherent actions the organisation will perform. Those three things provide a set of constraints that govern what the organisation will do without defining how the organisation will do it.

Even Ross, Weill, and Robertson's architecture for governing technology (from "Enterprise Architecture as Strategy") can be considered a set of constraints on how we integrate data and standardise processes as we implement new business systems.

Anyway, once an organisation has clearly defined the constraints on performance then governance becomes a straightforward exercise (but not easy). We just need to keep asking these questions:

  1. Are we getting the right information to determine if the organisation is adhering to the constraints?
  2. Is the organisation adhering to the constraints - and if not, why not?
  3. Are the constraints still valid?
It's also worth repeating Jim Highsmith's words here: we prefer delivering value over meeting constraints. If we are not delivering value then we should not be in business (well, we won't be in business for much longer anyway). If we have to break constraints along the way then that is a worthy conversation with the board (or the project steering committee) as to the validity of the constraints - or the validity of the business model.



Wednesday, February 26, 2014

Non-linear Software Development Workflow

Software development, like many manufacturing processes, is all about work dependencies. We do one thing so that another person can do their job. For example, testers need working software to start their testing. Well, perhaps they need working software to finish their testing.

Here's the interesting thing. In our team (a scrum team developing a payment switch) the testers don't need working software to finish their testing. We have managed to remove the dependency between software development and software testing. Let me explain how.

First, however, waterfall. In 1970 a dude (almost everyone working in CompSci in 1970 was a dude) named Winston Royce proposed the waterfall model. In this model, we follow a strict series of steps to produce software: requirements, design, implementation, verification, and maintenance.

(Ignore for the moment that Royce also said that the waterfall model doesn't work for large software projects - almost everyone else ignored him so we can as well :)

In the waterfall model, we have a strict "finish -> start" dependency. For example, requirements must finish before design can start. This problem is also present in the iterative methodologies (I also dropped "maintenance" from the flow, as maintenance is just another loop around the cycle):


However, then agile came along and some very clever people (I've heard this idea from both Alistair Cockburn and Mike Cohn) realised that the dependency between each stage is not a "finish -> start" dependency but a "finish -> finish" dependency. What that means is that testing can start before development finishes, but testing can't finish before development finishes.


This model is really useful in methodologies that have short time-boxed sprints - like Scrum (which, potentially, is not really a methodology but a "Reflective Improvement Framework", but we'll leave Alistair Cockburn with that interesting definition). It's useful because it means that team members can work mostly in parallel during a sprint.

Now, here is the interesting thing. With methodologies that have short time-boxed sprints your team starts getting really good at breaking down features into tiny stories - little pieces of functionality that deliver value to the user or customer. They represent externally visible changes in system behaviour that can be developed and tested (they also might be tokens for work, but that's a discussion for another time). The most important thing is that they are small. Very small. They can be as small as "text field for name displayed on screen" and then the project can have separate stories for data entry, field validation, security, etc. Some people have been known to slice keyboard and mouse navigation for an interface into different stories.

With stories this small it is entirely feasible for the entire team to sit down and very quickly get a testable shared understanding of the story. Then each discipline can go away and start work. The testers can write automated tests for the story and check them into the system. The developers can write code. The BA can resolve any ambiguity and communicate it to both the testers and developers.

This creates what I call a "shared start dependency with a deadline". The shared start dependency is the creation of a testable collective understanding in the team. The deadline is the end of the iteration.


Easy eh! Well no. In our team - where this happens regularly - there were several things we needed before this behaviour emerged. The things were:

  • An automated test suite where testers can specify tests without reference to the user interface. We used Concordion.
  • An automated test suite where testers can check in tests that aren't run by default (to give the developers a chance to build the feature before the continuous build system starts failing the tests).
    • We used the Jenkins CI server. Tests were stored in a Git repository - the testers used SourceTree to check their tests in. 
    • In the Maven build file we told Maven to only run Concordion tests called "indexTest.java" and then referenced the active tests from those files using the "c:run" annotation.
    • One Concordion hint: use Map<String, Object> as the return value for most of your fixtures. Look at the Concordion documentation on returning a map result - sadly there's no direct link to the correct section.
  • Maturity in slicing stories smaller and smaller. That took the current team 6 months of development - but we had a very low level of scrum experience when we started.
  • Much collaboration and a realisation that it was possible for a tester to specify the tests before the development started.

Wow. Thanks for reading this far!

Wednesday, December 9, 2009

Usage Centred Design - Modelling Users and their Tasks

Usage Centered Design is a user interface design methodology developed by Larry Constantine and Lucy Lockwood. It has some similarities with Cooper's About Face methodology but uses abstract models instead of concrete ones. This makes it harder to start using but, in my opinion, gives great reasoning power once it's understood and used properly.

This blog post describes the first phase of Usage Centered Design - modeling users and their tasks. A later post will describe using them to design a user interface.

Why model users?

The first key insight to understand is that we're not actually designing for users. We're designing to let people do things. This raises two important questions:
  1. How can we best describe the aspects of the people and how they need to interact?
  2. How can we best describe the things that they have to do?
The best known methodology for describing the people and the things they do is Cooper's About Face methodology (http://www.cooper.com). In this methodology, users are researched and then described using Personas. Personas are precise descriptions of someone who typifies an actual user - with all details about them. They're completely made up. The things they do are described using Scenarios. A Scenario is a precise description of what a Persona might actually do to achieve their goal - it contains much information about the context surrounding the interaction with a system. There are lots of real examples of both on the web:
  • http://chopsticker.com/2007/06/08/download-an-example-persona-used-in-the-design-of-a-web-application/
  • http://www.uiaccess.com/accessucd/scenarios_eg.html
From my perspective, the key criticism of this methodology is that it contains too much detail - and that the detail distracts from the information. This is because the methodology isn't trying to model the users - it's trying to describe them to the most precise level of detail. Usage Centered Design is different because it models only the details that are relevant to user interface design.

How can we model users?
This is really two questions.
  • What relationship between a user and a system do we want to model?
  • What information about that relationship do we want to capture?
In Usage Centered Design we model the role that a user plays when they are interacting with a system and we model how they will interact with the system while they are in that role. This is called a "User Role Model" and is very similar to a Use Case Model (with some additional information about the human actors involved). Constantine and Lockwood define a user role as a set of characteristic needs, expectations, and behaviors that a user can take on when interacting with a system - users play different roles at different times to achieve different goals.

For example, in a Pizza company we can model the users by creating several different roles:
  • Telephone Answerer
  • Order Maker
  • Order Deliverer
  • Staff Roster Maintainer
  • etc
The really important thing about these roles is that the roles are independent of the actual people who work there, their job titles, and the number of staff - these roles must be played in any pizza company. Having these roles (as well as relationships between the roles: order taker versus telephone order taker) lets us reason about the system in an abstract way.

In addition, we keep some additional information about each role. This can include:
  • The context of use (front of shop, potential interruption by customers)
  • Characteristics of use (customers have a tendency to change their mind)
  • Criteria (speed, simplicity, accuracy)
In contrast, Cooper's methodology models stereotypical users and captures all information about those users.

How can we model the things that users do?
Essential Use Cases are used in Usage Centered Design to model the things that users do. Each use case describes a particular task a user has to do with the system (or a goal they want to achieve with the system).

Essential Use Cases are just like ordinary use cases except:
  • When writing them we have an unholy focus on the minimal, essential interaction that is necessary for the user to achieve their goal. We assume that this use case is all that the user will be using the system for - resolving the navigation between different use cases is done later.
  • They're written in a 2-column format in order to visualize the necessary interaction.
  • We write the use case from the user's perspective - the interactions that they want to have first are first in the use case. If they don't care about order then we only have 1 step.
This focus on the minimal interaction is key. It lets us determine how good our solution is with respect to any one use case - we can count the number of steps in the solution and compare with any particular use case. It also lets us compare the visual layout of a solution with the order required in the use case.

In contrast, Cooper's methodology models how a particular user might actually achieve a particular goal - with all the contextual information in there.
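To make the 2-column format concrete, here is a sketch of what an essential use case might look like for the pizza example (my wording - not taken from Constantine and Lockwood):

Ordering a pizza

  User Intention               System Responsibility
  --------------               ---------------------
  choose pizza and quantity
                               confirm price and delivery time
  provide delivery details
                               confirm the order is accepted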

Aren't these typical Business Analysis artifacts?
Yes, the models used in Usage Centered Design are models that Business Analysts use on a daily basis. The difference is in how the models are used. In Usage Centered Design, the analyst has to have a constant focus on the user and how they're likely to use the system. This might involve a serious amount of user research if the tasks and how they're achieved are ambiguous (or, in a Pizza company, it might not :).

I've found that it's pretty easy to explain the focus of a use case and a user role map to business stakeholders. I've also found that once they've "got it" the type of information I'm getting from them changes - we start talking about requirements instead of solutions!

That's all good, but how do we use them?
Stay tuned for the next update: Usage Centered Design - using the models.

The Use Case Model

The use case model is, essentially, a visual representation of all the people and computers that will interact with a system and all the tasks that the people and computers can do with the system. In other words, it's a high level view of what a system will do.

This blog describes how to draw a use case model (it's not hard) and, much more importantly, how to reason about a system using a use case model. Because use case models can be drawn very quickly at the beginning of a project, any reasoning we can do with a model can have huge pay-off down the line.

What is a Use Case Model?
The thing I like most about use case models is that they're damn easy to understand. Take my most often used example: a pizza company:



This model tells us that there are two key people using the system - someone who takes orders and someone who makes them. The person who takes the orders can do three things - order a pizza, cancel an order, and deliver a pizza. The person who makes the order can only do one thing - make a pizza.


For a bit of extra reasoning power, I call the "people" in the diagram "user roles". This is because, in reality, one person might play both roles. Or there might be one role played by multiple people. To get technical, a "user role" is a role that a person takes on when using the system.


Are there any guidelines for drawing one?
In fact, yes. The use case model above is pretty damn awful. Here are some guidelines and how they're broken above:
  • A user role describes a role that a particular user takes on when interacting with the system. In the diagram above, the "order taker" role can perform activities relating to manipulating orders in the system and activities related to delivering pizza. This is bad because when we come to do solution design, we want to be able to support the tasks for a user role on as few screens as possible - and having order delivery support on the same screen as entering new orders would most probably be dumb.
  • The task names like "make pizza" relate to the overarching goals of the user as opposed to the goals from a particular interaction with the system. Much more sensible names would be things like "get next order to make" and "mark order as complete". In other words, the tasks performed at the system boundary are not well defined.
  • The complete set of tasks described is insufficient to achieve the key business scenario - get the right food to the right customers within 30 minutes. To support that scenario we need tasks like "view list of waiting orders".
Here is an updated use case model with the first two points applied:


This is a much more precise model. We're now clearly defining the key functional requirements of the system in a much less ambiguous way. We've also identified three types of user roles that require different types of solution support. For example, the requirements of the order taker - a person who has to deal with a customer who will change their mind - are quite different to the requirements of the order maker, a person who doesn't need a high degree of interactivity in any solution. We can reasonably expect that the order taker will become an expert user of the system, as their goal is to get the orders into the system, whereas for the order maker, using the system is secondary to their main goal of making the orders!



What kinds of reasoning can we do with them?
There are several kinds of reasoning that we can now do.

The first is identifying missing functionality. Given that our key business scenario is "get the right food to the right customers within 30 minutes", we might want to add in an "order delay manager" user role and use cases that let them examine the order queue and assign staff to particular parts of the shop. We might want to add in basic cash management and accounting functions - or even inventory control functions.

The second is around scope and prioritization. With a simple quick diagram we can have a conversation with our customers about how they view the system being used. We can then talk about which functions are the most important - and if we're doing things in an agile way, we can start building those key functions straight away.


Finally, we can use the diagram as a jumping-off point for analysis. There will be business rules applying to all functions. For example, there might be a cost to the customer of cancelling an order. However, to read about how to best document these rules - and the process required for describing the requirements for a function - you'll have to wait for the "use case" blog post I've got lined up!