Using a Makefile with .PHONY-only targets? Use a run.sh script instead

April 2021 ∙ four-minute read

I recently discovered a neat pattern:

When you have a Makefile that only has .PHONY targets, you can turn it into a shell script with functions, and dispatch to them by adding "$@" at the end.

It makes things easier to read and write, allows passing arguments to the "targets", and enables reuse both inside and outside the script.

This is not my idea, but I think it's quite cool, and thought others might too. Here's the article that sold me on it; it discusses the benefits in more detail and links to other projects that use it.

Why have a Makefile in the first place? #

I've been using a Makefile in my Python feed reader library to get convenient shortcuts for common development stuff: install dependencies, run tests, etc. In time, I ended up using some of the targets in CI, and mentioning them in the developer docs.

(I originally took this pattern from Flask, although they stopped using it after 1.0.)

Here's an abridged version to give you a taste (full Makefile here):

.PHONY: test typing

test:
    pytest --runslow

# mypy does not work on pypy as of January 2020
typing:
    test $$( python -c 'import sys; print(sys.implementation.name)' ) = pypy \
    && echo "mypy does not work on pypy, doing nothing" \
    || mypy --strict src

For me, this has two main downsides:

  • There's no way to pass arguments to the targets, for example to call pytest -v while also getting the "default" --runslow option. (In this case, I could have used the addopts config key – but I don't want to force everyone to use --runslow, I just want to show it's the recommended way.)
  • It makes it harder to write fully-featured scripts; it is possible, but the result tends to be less readable.

Enter run.sh #

We could rewrite that as a shell script; let's call it run.sh:

#!/bin/bash

# note: this shadows the `test` builtin
# (which is fine – in bash, functions take precedence over builtins)
function test {
    pytest --runslow "$@"
}

function typing {
    local impl=$( python -c 'import sys; print(sys.implementation.name)' )

    # mypy does not work on pypy as of January 2020
    if [[ $impl == pypy ]]; then
        echo "mypy does not work on pypy, doing nothing"
        return
    fi

    mypy --strict src "$@"
}

"$@"

The $@ at the end dispatches the script arguments to a function (so ./run.sh test calls test); the $@ in test passes the remaining arguments along (so ./run.sh test -v ends up running pytest --runslow -v).
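One thing to be aware of: the bare "$@" dispatch will run *any* command you pass it, not just the functions defined in the script (./run.sh ls works too). Some people like that; if you'd rather restrict dispatch to known "targets", here's a sketch of a stricter dispatcher (hello is a stand-in target, not from the real script):

```shell
#!/bin/bash

function hello {   # stand-in "target"
    echo "hello, ${1:-world}"
}

function dispatch {
    if [[ $# -eq 0 ]]; then
        # no arguments: list the available "targets"
        declare -F | awk '{ print $3 }'
    elif declare -F "$1" > /dev/null; then
        "$@"
    else
        echo "unknown command: $1" >&2
        return 1
    fi
}

dispatch "$@"
```

declare -F succeeds only if its argument is a defined function, which is exactly the check we need; as a bonus, running the script with no arguments now prints the list of targets instead of doing nothing.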

Why I think it's cool #

Executable documentation #

A script is a simple way of documenting project-specific development tools – with a bit of care, it becomes executable documentation; this is a huge benefit that's highlighted in the original article as well.

I'm strongly considering adding more comments to my run.sh and including it directly in the developer docs, instead of maintaining separate written documentation for the same commands.

Most commands are self-evident, and if you want to run something in a different way, you can copy-paste it directly into a terminal (not straightforward with a Makefile). Hell, you can even source it if you're using a compatible shell, and have a sort of "project shell".

Reusability #

Let's look at an example. I run coverage in three ways:

  • for development, with HTML reports and contexts ("who tests what")
  • for testing across Python versions/interpreters, with tox; contexts could be useful, but they increase run time
  • for continuous integration [1]; contexts are not needed

All cases should fail if coverage for specific modules is below 100%.

run.sh makes it possible to skip contexts when running under tox/CI, which reduced CI run time by 10-30%. Also, it avoids duplicating some pretty hairy commands.

Now, the developer-facing coverage-all command looks like this:

function coverage-all {
    coverage-run --cov-context=test "$@"
    coverage-report --show-contexts
}
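The coverage-run and coverage-report helpers it calls aren't shown above; reader's actual versions are hairier, but a minimal, illustrative sketch (the options and module patterns here are made up, not the real ones) might look like:

```shell
function coverage-run {
    # extra options (--cov-context=test, --cov-append) come in via "$@"
    pytest --cov=src --runslow "$@"
}

function coverage-report {
    coverage html "$@"
    # fail if coverage for specific modules is below 100%
    coverage report --include '*/reader/*' --fail-under 100
}
```

Because each helper forwards "$@", the three callers can layer their own options on top without duplicating the base command.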

... the tox.ini commands look like this:

[testenv]
commands = ./run.sh coverage-run --cov-append

[testenv:coverage-report]
commands = ./run.sh coverage-report

[testenv:typing]
commands = ./run.sh typing

... and CI calls a function that looks like this:

function ci-run {
    coverage-run && coverage-report && typing
}

This reusability extends to using the functions anywhere commands are expected:

timeout 5 ./run.sh myfunction

... including inside the script itself (the original article calls this $0-dispatch):

function typing-dev {
    find src -name '*.py' | entr -cdr "$0" typing "$@"
}

Here, entr takes a command (and its arguments), and runs it every time a Python file in src changes. Note that we use $0 to dispatch to the script's typing "target".


You can find reader's full run.sh here; in addition to the things above, it has:

  • more complex examples
  • a workaround to make $0-dispatch work when called with bash run.sh
  • a wrapper for using entr with git ls-files, based on this pattern from Julia Evans
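That last workaround deserves a sketch: when the script is started as bash run.sh, $0 can be a relative, non-executable path, so "$0"-dispatch can fail. One way around it (an illustrative sketch, not reader's exact code) is to resolve the script's path from BASH_SOURCE and re-invoke it through bash explicitly:

```shell
#!/bin/bash

# $0 is unreliable when the script is invoked as `bash run.sh`;
# BASH_SOURCE always points at this file
SCRIPT_PATH="${BASH_SOURCE[0]}"

function self {
    # re-invoke the script, regardless of how it was started
    bash "$SCRIPT_PATH" "$@"
}

function typing-dev {
    # watch mode, as above, but robust to `bash run.sh typing-dev`
    find src -name '*.py' | entr -cdr bash "$SCRIPT_PATH" typing "$@"
}

"$@"
```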

That's it for now.

Learned something new today? Share this with others, it really helps!

  1. I could probably use tox for CI as well, as Flask has done lately.