
Scaling acceptance tests

This chapter is a follow-up to Intro to acceptance tests. You can find the finished code for this chapter on GitHub.
Acceptance tests are essential, and they directly impact your ability to confidently evolve your system over time, with a reasonable cost of change.
They're also a fantastic tool to help you work with legacy code. When faced with a poor codebase without any tests, please resist the temptation to start refactoring. Instead, write some acceptance tests to give you a safety net to freely change the system's internals without affecting its functional external behaviour. ATs need not be concerned with internal quality, so they're a great fit in these situations.
After reading this, you'll appreciate that acceptance tests are useful for verification and can also be used in the development process by helping us change our system more deliberately and methodically, reducing wasted effort.

Prerequisite material

The inspiration for this chapter is borne of many years of frustration with acceptance tests. Two videos I would recommend you watch are:
  • Dave Farley - How to write acceptance tests
  • Nat Pryce - E2E functional tests that can run in milliseconds
"Growing Object Oriented Software" (GOOS) is such an important book for many software engineers, including myself. The approach it prescribes is the one I coach engineers I work with to follow.
  • GOOS - Nat Pryce & Steve Freeman
Finally, Riya Dattani and I spoke about this topic in the context of BDD in our talk, Acceptance tests, BDD and Go.

Recap

We're talking about "black-box" tests that verify your system behaves as expected from the outside, from a "business perspective". The tests do not have access to the innards of the system they test; they're only concerned with what your system does rather than how.

Anatomy of bad acceptance tests

Over many years, I've worked for several companies and teams. Each of them recognised the need for acceptance tests: some way to test a system from a user's point of view and verify it works as intended. But, almost without exception, the cost of these tests became a real problem for the team. The tests were:
  • Slow to run
  • Brittle
  • Flaky
  • Expensive to maintain, and seemed to make changing the software harder than it ought to be
  • Only able to run in a particular environment, causing slow and poor feedback loops
Let's say you intend to write an acceptance test around a website you're building. You decide to use a headless web browser (like Selenium) to simulate a user clicking buttons on your website to verify it does what it needs to do.
Over time, your website's markup has to change as new features are discovered, and engineers bike-shed over whether something should be an <article> or a <section> for the billionth time.
Even though your team are only making minor changes to the system, barely noticeable to the actual user, you find yourself wasting lots of time updating your ATs.

Tight-coupling

Think about what prompts acceptance tests to change:
  • An external behaviour change. If you want to change what the system does, changing the acceptance test suite seems reasonable, if not desirable.
  • An implementation detail change / refactoring. Ideally, this shouldn't prompt a change, or if it does, a minor one.
Too often, though, the latter is the reason acceptance tests have to change. To the point where engineers even become reluctant to change their system because of the perceived effort of updating tests!
Riya and me talking about separating concerns in our tests
These problems stem from not applying the well-established and practised engineering habits described by the authors mentioned above. You can't write acceptance tests like unit tests; they require more thought and different practices.

Anatomy of good acceptance tests

If we want acceptance tests that only change when we change behaviour and not implementation detail, it stands to reason that we need to separate those concerns.

On types of complexity

As software engineers, we have to deal with two kinds of complexity.
  • Accidental complexity is the complexity we have to deal with because we're working with computers, stuff like networks, disks, APIs, etc.
  • Essential complexity is sometimes referred to as "domain logic". It's the particular rules and truths within your domain.
    • For example, "if an account owner withdraws more money than is available, they are overdrawn". This statement says nothing about computers; this statement was true before computers were even used in banks!
Essential complexity should be expressible to a non-technical person, and it's valuable to have modelled it in our "domain" code, and in our acceptance tests.
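To make this concrete, here is a minimal sketch of essential complexity expressed as plain domain code. The `Account` type and its methods are invented for illustration; they are not part of this chapter's project. Notice the code says nothing about HTTP, disks, or databases.

```go
package main

import "fmt"

// Account models the banking example above: pure domain logic,
// free of accidental complexity.
type Account struct {
	Balance int
}

// Withdraw reduces the balance; withdrawing more than is available
// leaves the account overdrawn.
func (a *Account) Withdraw(amount int) {
	a.Balance -= amount
}

// Overdrawn is true when the balance has gone below zero.
func (a *Account) Overdrawn() bool {
	return a.Balance < 0
}

func main() {
	acc := Account{Balance: 100}
	acc.Withdraw(150)
	fmt.Println(acc.Overdrawn()) // prints "true"
}
```

Because nothing here depends on computers or infrastructure, the same rule could be explained to, and verified with, a non-technical domain expert.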

Separation of concerns

What Dave Farley proposed in the video earlier, and what Riya and I also discussed, is we should have the idea of specifications. Specifications describe the behaviour of the system we want without being coupled with accidental complexity or implementation detail.
This idea should feel reasonable to you. In production code, we frequently strive to separate concerns and decouple units of work. You wouldn't hesitate to introduce an interface to decouple your HTTP handler from non-HTTP concerns, so let's take the same line of thinking to our acceptance tests.
Dave Farley describes a specific structure.
Dave Farley on Acceptance Tests
At GopherconUK, Riya and I put this in Go terms.
Separation of concerns

Testing on steroids

Decoupling how the specification is executed allows us to reuse it in different scenarios. We can:

Make our drivers configurable

This means you can run your ATs locally, in your staging and (ideally) production environments.
  • Too many teams engineer their systems such that acceptance tests are impossible to run locally. This introduces an intolerably slow feedback loop. Wouldn't you rather be confident your ATs will pass before integrating your code? If the tests start breaking, is it acceptable that you'd be unable to reproduce the failure locally and instead, have to commit changes and cross your fingers that it'll pass 20 minutes later in a different environment?
  • Remember, just because your tests pass in staging doesn't mean your system will work. Dev/Prod parity is, at best, a white lie. I test in prod.
  • There are always differences between the environments that can affect the behaviour of your system. A CDN could have some cache headers incorrectly set; a downstream service you depend on may behave differently; a configuration value may be incorrect. But wouldn't it be nice if you could run your specifications in prod to catch these problems quickly?

Plug in different drivers to test other parts of your system

This flexibility lets us test behaviours at different abstraction and architectural layers, giving us more focused tests beyond black-box tests.
  • For instance, you may have a web page with an API behind it. Why not use the same specification to test both? You can use a headless web browser for the web page, and HTTP calls for the API.
  • Taking this idea further, ideally, we want the code to model essential complexity (as "domain" code) so we should also be able to use our specifications for unit tests. This will give swift feedback that the essential complexity in our system is modelled and behaves correctly.

Acceptance tests changing for the right reasons

With this approach, the only reason for your specifications to change is if the behaviour of the system changes, which is reasonable.
  • If your HTTP API has to change, you have one obvious place to update it, the driver.
  • If your markup changes, again, update the specific driver.
As your system grows, you'll find yourself reusing drivers for multiple tests, which again means if implementation detail changes, you only have to update one, usually obvious place.
When done right, this approach gives us flexibility in our implementation detail and stability in our specifications. Importantly, it provides a simple and obvious structure for managing change, which becomes essential as a system and its team grows.

Acceptance tests as a method for software development

In our talk, Riya and I discussed acceptance tests and their relation to BDD. We talked about how starting by trying to understand the problem you're trying to solve, and expressing it as a specification, helps focus your intent and is a great way to begin your work.
I was first introduced to this way of working in GOOS. A while ago, I summarised the ideas on my blog. Here is an extract from my post Why TDD.
TDD is focused on letting you design for the behaviour you precisely need, iteratively. When starting a new area, you must identify a key, necessary behaviour and aggressively cut scope.
Follow a "top-down" approach, starting with an acceptance test (AT) that exercises the behaviour from the outside. This will act as a north-star for your efforts. All you should be focused on is making that test pass. This test will likely be failing for a while whilst you develop enough code to make it pass.
Once your AT is set up, you can break into the TDD process to drive out enough units to make the AT pass. The trick is to not worry too much about design at this point; get enough code to make the AT pass because you're still learning and exploring the problem.
Taking this first step is often more extensive than you think, setting up web servers, routing, configuration, etc., which is why keeping the scope of the work small is essential. We want to make that first positive step on our blank canvas and have it backed by a passing AT so we can continue to iterate quickly and safely.
As you develop, listen to your tests, and they should give you signals to help you push your design in a better direction but, again, anchored to the behaviour rather than our imagination.
Typically, your first "unit" that does the hard work to make the AT pass will grow too big to be comfortable, even for this small amount of behaviour. This is when you can start thinking about how to break the problem down and introduce new collaborators.
This is where test doubles (e.g. fakes, mocks) are handy because most of the complexity that lives internally within software doesn't usually reside in implementation detail but "between" the units and how they interact.
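As a sketch of this idea (all names here are invented for illustration, not from this chapter's project), a simple test double records how a unit interacts with its collaborator, letting us test the "between" without any accidental complexity:

```go
package main

import "fmt"

// Notifier is a collaborator our unit depends on; in production it might
// send an email or a push notification (accidental complexity).
type Notifier interface {
	Notify(message string) error
}

// SpyNotifier is a test double: it records calls instead of doing real
// work, so a test can assert on the interaction itself.
type SpyNotifier struct {
	Messages []string
}

func (s *SpyNotifier) Notify(message string) error {
	s.Messages = append(s.Messages, message)
	return nil
}

// GreetAndNotify is a hypothetical unit whose interesting behaviour lives
// in how it collaborates with its Notifier.
func GreetAndNotify(n Notifier, name string) error {
	return n.Notify(fmt.Sprintf("greeted %s", name))
}

func main() {
	spy := &SpyNotifier{}
	_ = GreetAndNotify(spy, "Mike")
	fmt.Println(spy.Messages[0]) // prints "greeted Mike"
}
```

The test double keeps the unit test fast and deterministic while still verifying the conversation between the units.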

The perils of bottom-up

This is a "top-down" approach rather than a "bottom-up" one. Bottom-up has its uses, but it carries an element of risk. By building "services" and code without integrating them into your application quickly, and without verifying them against a high-level test, you risk wasting lots of effort on unvalidated ideas.
This is a crucial property of the acceptance-test-driven approach, using tests to get real validation of our code.
Too many times, I've encountered engineers who have built a chunk of code in isolation, bottom-up, which they think will solve a job, but it:
  • Doesn't work how we want it to
  • Does stuff we don't need
  • Doesn't integrate easily
  • Requires a ton of re-writing anyway
This is waste.

Enough talk, time to code

Unlike other chapters, you'll need Docker installed because we'll be running our applications in containers. It's assumed at this point in the book you're comfortable writing Go code, importing from different packages, etc.
Create a new project with go mod init github.com/quii/go-specs-greet (you can put whatever you like here but if you change the path you will need to change all internal imports to match)
Make a folder specifications to hold our specifications, and add a file greet.go
package specifications

import (
	"testing"

	"github.com/alecthomas/assert/v2"
)

type Greeter interface {
	Greet() (string, error)
}

func GreetSpecification(t testing.TB, greeter Greeter) {
	got, err := greeter.Greet()
	assert.NoError(t, err)
	assert.Equal(t, got, "Hello, world")
}
My IDE (Goland) takes care of the fuss of adding dependencies for me, but if you need to do it manually, you'd do
go get github.com/alecthomas/assert/v2
Given Farley's acceptance test design (Specification->DSL->Driver->System), we now have a specification that is decoupled from implementation. It doesn't know or care about how we Greet; it's just concerned with the essential complexity of our domain. Admittedly, this complexity isn't much right now, but we'll expand upon the spec to add more functionality as we iterate further. It's always important to start small!
You could view the interface as our first step of a DSL; as the project grows, you may find the need to abstract differently, but for now, this is fine.
At this point, this level of ceremony to decouple our specification from implementation might make some people accuse us of "overly abstracting". I promise you that acceptance tests that are too coupled to implementation become a real burden on engineering teams. I am confident that most acceptance tests out in the wild are expensive to maintain due to this inappropriate coupling, rather than because they are overly abstract.
We can use this specification to verify any "system" that can Greet.

First system: HTTP API

We need to provide a "greeter service" over HTTP. So we'll need to create:
  1. A driver. In this case, one that works with an HTTP system by using an HTTP client. This code will know how to work with our API. Drivers translate DSLs into system-specific calls; in our case, the driver will implement the interface the specification defines.
  2. An HTTP server with a greet API
  3. A test, which is responsible for managing the life-cycle of spinning up the server and then plugging the driver into the specification to run it as a test

Write the test first

The initial process for creating a black-box test that compiles and runs your program, executes the test and then cleans everything up can be quite labour intensive. That's why it's preferable to do it at the start of your project with minimal functionality. I typically start all my projects with a "hello world" server implementation, with all of my tests set up and ready for me to build the actual functionality quickly.
The mental model of "specifications", "drivers", and "acceptance tests" can take a little time to get used to, so follow carefully. It can be helpful to "work backwards" by trying to call the specification first.
Create some structure to house the program we intend to ship.
mkdir -p cmd/httpserver
Inside the new folder, create a new file greeter_server_test.go, and add the following.
package main_test

import (
	"testing"

	"github.com/quii/go-specs-greet/specifications"
)

func TestGreeterServer(t *testing.T) {
	specifications.GreetSpecification(t, nil)
}
We wish to run our specification in a Go test. We already have access to a *testing.T, so that's the first argument, but what about the second?
specifications.Greeter is an interface, which we will implement with a Driver by changing the new TestGreeterServer code to the following:
func TestGreeterServer(t *testing.T) {
	driver := go_specs_greet.Driver{BaseURL: "http://localhost:8080"}
	specifications.GreetSpecification(t, driver)
}
It would be favourable for our Driver to be configurable to run it against different environments, including locally, so we have added a BaseURL field.

Try to run the test

./greeter_server_test.go:46:12: undefined: go_specs_greet.Driver
We're still practising TDD here! It's a big first step we have to make; we need to make a few files and write maybe more code than we're typically used to, but when you're first starting, this is often the case. It's so important we try to remember the red step's rules.
Commit as many sins as necessary to get the test passing

Write the minimal amount of code for the test to run and check the failing test output

Hold your nose; remember, we can refactor when the test has passed. Here's the code for the driver in driver.go which we will place in the project root:
package go_specs_greet

import (
	"io"
	"net/http"
)

type Driver struct {
	BaseURL string
}

func (d Driver) Greet() (string, error) {
	res, err := http.Get(d.BaseURL + "/greet")
	if err != nil {
		return "", err
	}
	defer res.Body.Close()
	greeting, err := io.ReadAll(res.Body)
	if err != nil {
		return "", err
	}
	return string(greeting), nil
}
Notes:
  • You could argue that I should be writing tests to drive out the various if err != nil, but in my experience, so long as you're not doing anything with the err, tests that say "you return the error you get" are relatively low value.
  • We're using the default HTTP client here, which you shouldn't do in real code. Later, we'll pass in an HTTP client so it can be configured with timeouts etc., but for now, we're just trying to get ourselves to a passing test.
Try and rerun the tests; they should now compile but not pass.
Get "http://localhost:8080/greet": dial tcp [::1]:8080: connect: connection refused
We have a Driver, but we have not started our application yet, so it cannot make an HTTP request. We need our acceptance test to coordinate building, running and finally killing our system for the test to run.

Running our application

It's common for teams to build Docker images of their systems to deploy, so for our test, we'll do the same.
To help us use Docker in our tests, we will use Testcontainers. Testcontainers gives us a programmatic way to build Docker images and manage container life-cycles.
go get github.com/testcontainers/testcontainers-go
Now you can edit cmd/httpserver/greeter_server_test.go to read as follows:
package main_test

import (
	"context"
	"testing"

	"github.com/alecthomas/assert/v2"
	go_specs_greet "github.com/quii/go-specs-greet"
	"github.com/quii/go-specs-greet/specifications"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestGreeterServer(t *testing.T) {
	ctx := context.Background()

	req := testcontainers.ContainerRequest{
		FromDockerfile: testcontainers.FromDockerfile{
			Context:    "../../.",
			Dockerfile: "./cmd/httpserver/Dockerfile",
			// set to false if you want less spam, but this is helpful if you're having troubles
			PrintBuildLog: true,
		},
		ExposedPorts: []string{"8080:8080"},
		WaitingFor:   wait.ForHTTP("/").WithPort("8080"),
	}
	container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	assert.NoError(t, err)
	t.Cleanup(func() {
		assert.NoError(t, container.Terminate(ctx))
	})

	driver := go_specs_greet.Driver{BaseURL: "http://localhost:8080"}
	specifications.GreetSpecification(t, driver)
}
Try and run the test.
=== RUN TestGreeterHandler
2022/09/10 18:49:44 Starting container id: 03e8588a1be4 image: docker.io/testcontainers/ryuk:0.3.3
2022/09/10 18:49:45 Waiting for container id 03e8588a1be4 image: docker.io/testcontainers/ryuk:0.3.3
2022/09/10 18:49:45 Container is ready id: 03e8588a1be4 image: docker.io/testcontainers/ryuk:0.3.3
greeter_server_test.go:32: Did not expect an error but got:
Error response from daemon: Cannot locate specified Dockerfile: ./cmd/httpserver/Dockerfile: failed to create container
--- FAIL: TestGreeterHandler (0.59s)
We need to create a Dockerfile for our program. Inside our httpserver folder, create a Dockerfile and add the following.
FROM golang:1.18-alpine
WORKDIR /app
COPY go.mod ./
RUN go mod download
COPY . .
RUN go build -o svr cmd/httpserver/*.go
EXPOSE 8080
CMD [ "./svr" ]
Don't worry too much about the details here; it can be refined and optimised, but for this example, it'll suffice. The advantage of our approach here is we can later improve our Dockerfile and have a test to prove it works as we intend it to. This is a real strength of having black-box tests!
Try and rerun the test; it should complain about not being able to build the image. Of course, that's because we haven't written a program to build yet!
For the test to fully execute, we'll need to create a program that listens on 8080, but that's all. Stick to the TDD discipline; don't write the production code that would make the test pass until we've verified the test fails as we'd expect.
Create a main.go inside our httpserver folder with the following
package main

import (
	"log"
	"net/http"
)

func main() {
	handler := http.HandlerFunc(func(writer http.ResponseWriter, request *http.Request) {
	})

	if err := http.ListenAndServe(":8080", handler); err != nil {
		log.Fatal(err)
	}
}
Try to run the test again, and it should fail with the following.
greet.go:16: Expected values to be equal:
+Hello, World
\ No newline at end of file
--- FAIL: TestGreeterHandler (2.09s)

Write enough code to make it pass

Update the handler to behave how our specification wants it to
func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprint(w, "Hello, world")
	})

	if err := http.ListenAndServe(":8080", handler); err != nil {
		log.Fatal(err)
	}
}

Refactor

Whilst this technically isn't a refactor, we shouldn't rely on the default HTTP client, so let's change our Driver so we can supply one, which our test will provide.
type Driver struct {
	BaseURL string
	Client  *http.Client
}

func (d Driver) Greet() (string, error) {
	res, err := d.Client.Get(d.BaseURL + "/greet")
	if err != nil {
		return "", err
	}
	defer res.Body.Close()
	greeting, err := io.ReadAll(res.Body)
	if err != nil {
		return "", err
	}
	return string(greeting), nil
}
In our test in cmd/httpserver/greeter_server_test.go, update the creation of the driver to pass in a client.
client := http.Client{
	Timeout: 1 * time.Second,
}

driver := go_specs_greet.Driver{BaseURL: "http://localhost:8080", Client: &client}
specifications.GreetSpecification(t, driver)
It's good practice to keep main.go as simple as possible; it should only be concerned with piecing together the building blocks you make into an application.
Create a file in the project root called handler.go and move our code into there.
package go_specs_greet

import (
	"fmt"
	"net/http"
)

func Handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello, world")
}
Update main.go to import and use the handler instead.
package main

import (
	"net/http"

	go_specs_greet "github.com/quii/go-specs-greet"
)

func main() {
	handler := http.HandlerFunc(go_specs_greet.Handler)
	http.ListenAndServe(":8080", handler)
}

Reflect

The first step felt like an effort. We've created several Go files to build and test an HTTP handler that returns a hard-coded string. This "iteration 0" ceremony and setup will serve us well for further iterations.
Changing functionality should be simple and controlled by driving it through the specification and dealing with whatever changes it forces us to make. Now that the Dockerfile and testcontainers are set up for our acceptance test, we shouldn't have to change these files unless the way we construct our application changes.
We'll see this with our following requirement, greet a particular person.

Write the test first

Edit our specification
package specifications

import (
	"testing"

	"github.com/alecthomas/assert/v2"
)

type Greeter interface {
	Greet(name string) (string, error)
}

func GreetSpecification(t testing.TB, greeter Greeter) {
	got, err := greeter.Greet("Mike")
	assert.NoError(t, err)
	assert.Equal(t, got, "Hello, Mike")
}
To allow us to greet specific people, we need to change the interface to our system to accept a name parameter.

Try to run the test

./greeter_server_test.go:48:39: cannot use driver (variable of type go_specs_greet.Driver) as type specifications.Greeter in argument to specifications.GreetSpecification:
go_specs_greet.Driver does not implement specifications.Greeter (wrong type for Greet method)
have Greet() (string, error)
want Greet(name string) (string, error)
The change in the specification has meant our driver needs to be updated.

Write the minimal amount of code for the test to run and check the failing test output

Update the driver so it specifies a name query value in the request to ask for a particular name to be greeted.
func (d Driver) Greet(name string) (string, error) {
	res, err := d.Client.Get(d.BaseURL + "/greet?name=" + name)
	if err != nil {
		return "", err
	}
	defer res.Body.Close()
	greeting, err := io.ReadAll(res.Body)
	if err != nil {
		return "", err
	}
	return string(greeting), nil
}
The test should now run, and fail.
greet.go:16: Expected values to be equal:
-Hello, world
\ No newline at end of file
+Hello, Mike
\ No newline at end of file
--- FAIL: TestGreeterHandler (1.92s)

Write enough code to make it pass

Extract the name from the request and greet.
func Handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s", r.URL.Query().Get("name"))
}
The test should now pass.

Refactor

In HTTP Handlers Revisited, we discussed how important it is that HTTP handlers are only responsible for handling HTTP concerns; any "domain logic" should live outside the handler. This allows us to develop domain logic in isolation from HTTP, making it simpler to test and understand.
Let's pull apart these concerns.
Update our handler in ./handler.go as follows:
func Handler(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	fmt.Fprint(w, Greet(name))
}
Create a new file ./greet.go:
package go_specs_greet

import "fmt"

func Greet(name string) string {
	return fmt.Sprintf("Hello, %s", name)
}

A slight diversion into the "adapter" design pattern

Now that we've separated our domain logic of greeting people into a separate function, we are free to write unit tests for our greet function. This is undoubtedly a lot simpler than testing it through a specification that goes through a driver that hits a web server, just to get a string!
Wouldn't it be nice if we could reuse our specification here too? After all, the specification's point is decoupled from implementation details. If the specification captures our essential complexity and our "domain" code is supposed to model it, we should be able to use them together.
Let's give it a go by creating ./greet_test.go as follows:
package go_specs_greet_test

import (
	"testing"

	go_specs_greet "github.com/quii/go-specs-greet"
	"github.com/quii/go-specs-greet/specifications"
)

func TestGreet(t *testing.T) {
	specifications.GreetSpecification(t, go_specs_greet.Greet)
}
This would be nice, but it doesn't work:
./greet_test.go:11:39: cannot use go_specs_greet.Greet (value of type func(name string) string) as type specifications.Greeter in argument to specifications.GreetSpecification:
func(name string) string does not implement specifications.Greeter (missing Greet method)
Our specification wants something that has a method Greet() not a function.
The compilation error is frustrating; we have a thing that we "know" is a Greeter, but it's not quite in the right shape for the compiler to let us use it. This is what the adapter pattern caters for.
In software engineering, the adapter pattern is a software design pattern (also known as wrapper, an alternative naming shared with the decorator pattern) that allows the interface of an existing class to be used as another interface.[1] It is often used to make existing classes work with others without modifying their source code.
A lot of fancy words for something relatively simple, which is often the case with design patterns, which is why people tend to roll their eyes at them. The value of design patterns is not specific implementations but a language to describe specific solutions to common problems engineers face. If you have a team that has a shared vocabulary, it reduces the friction in communication.
Add this code in ./specifications/adapters.go:
package specifications

type GreetAdapter func(name string) string

func (g GreetAdapter) Greet(name string) (string, error) {
	return g(name), nil
}
We can now use our adapter in our test to plug our Greet function into the specification.
package go_specs_greet_test

import (
	"testing"

	gospecsgreet "github.com/quii/go-specs-greet"
	"github.com/quii/go-specs-greet/specifications"
)

func TestGreet(t *testing.T) {
	specifications.GreetSpecification(
		t,
		specifications.GreetAdapter(gospecsgreet.Greet),
	)
}
The adapter pattern is handy when you have a type that exhibits the behaviour that an interface wants, but isn't in the right shape.

Reflect

The behaviour change felt simple, right? OK, maybe it was simply due to the nature of the problem, but this method of work gives you discipline and a simple, repeatable way of changing your system from top to bottom:
  • Analyse your problem and identify a slight improvement to your system that pushes you in the right direction
  • Capture the new essential complexity in a specification
  • Follow the compilation errors until the AT runs
  • Update your implementation to make the system behave according to the specification
  • Refactor
After the pain of the first iteration, we didn't have to edit our acceptance test code because we have the separation of specifications, drivers and implementation. Changing our specification required us to update our driver and finally our implementation, but the boilerplate code around how to spin up the system as a container was unaffected.
Even with the overhead of building a docker image for our application and spinning up the container, the feedback loop for testing our entire application is very tight:
$ go test ./...
ok github.com/quii/go-specs-greet 0.181s
ok github.com/quii/go-specs-greet/cmd/httpserver 2.221s
? github.com/quii/go-specs-greet/specifications [no test files]
Now, imagine your CTO has now decided that gRPC is the future. She wants you to expose this same functionality over a gRPC server whilst maintaining the existing HTTP server.
This is an example of accidental complexity. Remember, accidental complexity is the complexity we have to deal with because we're working with computers, stuff like networks, disks, APIs, etc. The essential complexity has not changed, so we shouldn't have to change our specifications.
Many repository structures and design patterns are mainly dealing with separating types of complexity. For instance, "ports and adapters" ask that you separate your domain code from anything to do with accidental complexity; that code lives in an "adapters" folder.

Making the change easy

Sometimes, it makes sense to do some refactoring before making a change.
First make the change easy, then make the easy change
~Kent Beck
For that reason, let's move our http code - driver.go and handler.go - into a package called httpserver within an adapters folder and change their package names to httpserver.
You'll now need to import "fmt", "net/http" and the root package into handler.go to refer to the Greet function:
package httpserver

import (
	"fmt"
	"net/http"

	go_specs_greet "github.com/quii/go-specs-greet"
)

func Handler(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	fmt.Fprint(w, go_specs_greet.Greet(name))
}
Import your httpserver adapter into main.go:
package main

import (
	"net/http"

	"github.com/quii/go-specs-greet/adapters/httpserver"
)

func main() {
	handler := http.HandlerFunc(httpserver.Handler)
	http.ListenAndServe(":8080", handler)
}
and update the import and reference to Driver in greeter_server_test.go:
import (
	...
	"github.com/quii/go-specs-greet/adapters/httpserver"
	...
)

...

driver := httpserver.Driver{BaseURL: "http://localhost:8080", Client: &client}

...
Finally, it's helpful to gather our domain-level code into its own folder too. Don't be lazy and have a domain folder in your projects with hundreds of unrelated types and functions. Make an effort to think about your domain and group ideas that belong together. This will make your project easier to understand and will improve the quality of your imports.
Rather than seeing
domain.Greet
Which is just a bit weird, instead favour
interactions.Greet
Create a domain folder to house all your domain code, and within it, an interactions folder. Depending on your tooling, you may have to update some imports and code.
Our project tree should now look like this:
$ tree
.
├── Dockerfile
├── Makefile
├── README.md
├── adapters
│   └── httpserver
│       ├── driver.go
│       └── handler.go
├── cmd
│   └── httpserver
│       ├── greeter_server_test.go
│       └── main.go
├── domain
│   └── interactions
│       ├── greet.go
│       └── greet_test.go
├── go.mod
├── go.sum
└── specifications
    └── greet.go
Our domain code, the essential complexity, lives in the domain folder of our go module, and the code that allows us to use it in "the real world" is organised into adapters. The cmd folder is where we can compose these logical groupings into practical applications, which have black-box tests to verify it all works. Nice!
Finally, we can tidy up our acceptance test a little. If you consider the high-level steps of our acceptance test:
  • Build a docker image
  • Wait for it to be listening on some port
  • Create a driver that understands how to translate the DSL into system specific calls
  • Plug the driver into the specification
... you'll realise we have the same requirements for an acceptance test for the gRPC server!
The adapters folder seems as good a place as any, so inside a file called docker.go, encapsulate the first two steps in a function that we'll reuse next.
package adapters

import (
	"context"
	"fmt"
	"testing"
	"time"

	"github.com/alecthomas/assert/v2"
	"github.com/docker/go-connections/nat"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func StartDockerServer(
	t testing.TB,
	port string,
	dockerFilePath string,
) {
	ctx := context.Background()
	t.Helper()
	req := testcontainers.ContainerRequest{
		FromDockerfile: testcontainers.FromDockerfile{
			Context:       "../../.",
			Dockerfile:    dockerFilePath,
			PrintBuildLog: true,
		},
		ExposedPorts: []string{fmt.Sprintf("%s:%s", port, port)},
		WaitingFor:   wait.ForListeningPort(nat.Port(port)).WithStartupTimeout(5 * time.Second),
	}
	container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	assert.NoError(t, err)
	t.Cleanup(func() {
		assert.NoError(t, container.Terminate(ctx))
	})
}
This gives us an opportunity to clean up our acceptance test a little
func TestGreeterServer(t *testing.T) {
	var (
		port           = "8080"
		dockerFilePath = "./cmd/httpserver/Dockerfile"
		baseURL        = fmt.Sprintf("http://localhost:%s", port)
		driver         = httpserver.Driver{BaseURL: baseURL, Client: &http.Client{
			Timeout: 1 * time.Second,
		}}
	)

	adapters.StartDockerServer(t, port, dockerFilePath)
	specifications.GreetSpecification(t, driver)
}
This should make writing the next test simpler.

Write the test first

This new functionality can be accomplished by creating a new adapter to interact with our domain code. For that reason we:
  • Shouldn't have to change the specification;
  • Should be able to reuse the specification;
  • Should be able to reuse the domain code.
Create a new folder grpcserver inside cmd to house our new program and the corresponding acceptance test. Inside cmd/grpcserver/greeter_server_test.go, add an acceptance test, which looks very similar to our HTTP server test, not by coincidence but by design.
package main_test

import (
	"fmt"
	"testing"

	"github.com/quii/go-specs-greet/adapters"
	"github.com/quii/go-specs-greet/adapters/grpcserver"
	"github.com/quii/go-specs-greet/specifications"
)

func TestGreeterServer(t *testing.T) {
	var (
		port           = "50051"
		dockerFilePath = "./cmd/grpcserver/Dockerfile"
		driver         = grpcserver.Driver{Addr: fmt.Sprintf("localhost:%s", port)}
	)

	adapters.StartDockerServer(t, port, dockerFilePath)
	specifications.GreetSpecification(t, &driver)
}
The only differences are:
  • We use a different Dockerfile, because we're building a different program
  • We'll need a new Driver that'll use gRPC to interact with our new program

Try to run the test

./greeter_server_test.go:26:12: undefined: grpcserver
We haven't created a Driver yet, so it won't compile.

Write the minimal amount of code for the test to run and check the failing test output

Create a grpcserver folder inside adapters and inside it create driver.go
package grpcserver

type Driver struct {
	Addr string
}

func (d Driver) Greet(name string) (string, error) {
	return "", nil
}
If you run the test again, it should now compile but not pass, because we haven't created a Dockerfile and corresponding program to run.
Create a new Dockerfile inside cmd/grpcserver.
FROM golang:1.18-alpine

WORKDIR /app

COPY go.mod ./
RUN go mod download

COPY . .
RUN go build -o svr cmd/grpcserver/*.go

EXPOSE 50051
CMD [ "./svr" ]
And a main.go
package main

import "fmt"

func main() {
	fmt.Println("implement me")
}
You should find now that the test fails because our server is not listening on the port. Now is the time to start building our client and server with gRPC.

Write enough code to make it pass

gRPC

If you're unfamiliar with gRPC, I'd start by looking at the gRPC website. Still, for this chapter, it's just another kind of adapter into our system, a way for other systems to call (remote procedure call) our excellent domain code.
The twist is you define a "service definition" using Protocol Buffers. You then generate server and client code from the definition. This not only works for Go but for most mainstream languages too. This means you can share a definition with other teams in your company who may not even write Go and can still do service-to-service communication smoothly.
If you haven't used gRPC before, you'll need to install a Protocol buffer compiler and some Go plugins. The gRPC website has clear instructions on how to do this.
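For reference, installing the Go plugins typically looks like this — a sketch following the gRPC quick-start; pinning specific versions instead of @latest is a reasonable alternative:

```shell
# Install the protoc plugins that generate Go message and gRPC service code.
go install google.golang.org/protobuf/cmd/protoc-gen-go@latest
go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@latest

# protoc discovers the plugins via PATH, so ensure your Go bin dir is on it.
export PATH="$PATH:$(go env GOPATH)/bin"
```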
Inside the same folder as our new driver, add a greet.proto file with the following
syntax = "proto3";

option go_package = "github.com/quii/go-specs-greet/adapters/grpcserver";
package grpcserver;

service Greeter {
  rpc Greet (GreetRequest) returns (GreetReply) {}
}

message GreetRequest {
  string name = 1;
}

message GreetReply {
  string message = 1;
}
To understand this definition, you don't need to be an expert in Protocol Buffers. We define a service with a Greet method and then describe the incoming and outgoing message types.
Inside adapters/grpcserver run the following to generate the client and server code
protoc --go_out=. --go_opt=paths=source_relative \
--go-grpc_out=. --go-grpc_opt=paths=source_relative \
greet.proto
If it worked, we would have some code generated for us to use. Let's start by using the generated client code inside our Driver.
package grpcserver

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

type Driver struct {
	Addr string
}

func (d Driver) Greet(name string) (string, error) {
	// todo: we shouldn't redial every time we call greet, refactor out when we're green
	conn, err := grpc.Dial(d.Addr, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return "", err
	}
	defer conn.Close()

	client := NewGreeterClient(conn)
	greeting, err := client.Greet(context.Background(), &GreetRequest{
		Name: name,
	})
	if err != nil {
		return "", err
	}

	return greeting.Message, nil
}
Now that we have a client, we need to update our main.go to create a server. Remember, at this point we're just trying to get our test to pass; we're not worrying about code quality.
package main

import (
	"context"
	"log"
	"net"

	"github.com/quii/go-specs-greet/adapters/grpcserver"
	"google.golang.org/grpc"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err