Shell Script Composition vs Make: When Each One Actually Shines
Most automation workflows fall into one of two buckets:
- Workflows I design myself
- Wrapping external tools with annoying CLI flags
These two categories benefit from very different approaches. After years of doing this across many projects — deployments, testing, local runs, build flows — my preference is now very clear:
Use shell scripts for composition. Use Make for presets.
This article explains why.
1. Two Legitimate Models of Automation
Both shell scripts and Makefiles can automate anything. But their mental models are completely different:
- Shell = composition: small scripts call other scripts. The workflow flows in a natural “step → step → step” way.
- Make = presets: short, memorable task names that expand into long, annoying commands you don’t want to type.
Once you understand this division, the right tool becomes obvious.
2. Why Shell Scripts Are My Default Task Runner
2.1. Compositional Advantages
A shell script is just a tiny function.
A script that calls other scripts is just a composition of functions.
This means:
- Each file does one clear thing.
- Higher-level scripts reuse lower-level ones.
- The workflow is 100% transparent.
Open deploy.sh and you instantly see:
```shell
source creds_get.sh
./deploy_core.sh
source creds_shred.sh
```
No DSL. No switching mental contexts. Everything is bash.
2.2. Organic Growth
This is the part that Make can’t match.
As my needs grow, my scripts grow naturally:
- Need a “build only”? → create build.sh
- Need a “deploy only”? → create deploy_only.sh
- Have an analysis script that runs and persists a report? → create update_progress.sh
- Want a single script that does everything? → wrap them together in build_and_deploy.sh
Automation grows by decomposing and composing scripts — not by rewriting a central Makefile every time I add a new idea.
2.3. No Flags Needed
When I don’t want to remember flags, I don’t retrofit them into a single mega-script with 15 command-line options.
I just create another tiny wrapper script:
```
deploy.sh
deploy_only.sh
build_and_deploy.sh
local_run.sh
```
Each one is:
- easy to read
- easy to maintain
- easy to find
- self-documenting through its filename
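As a minimal sketch of the wrapper pattern (script and step names are illustrative): in a real tree, build and deploy would each be their own file, and build_and_deploy.sh would be nothing but the two calls in sequence. The steps are stubbed as functions here so the sketch is self-contained and runnable.

```shell
#!/usr/bin/env bash
# build_and_deploy.sh, sketched: composition instead of flags.
# In a real project each step would be its own script
# (build.sh, deploy_only.sh) invoked by path.
set -euo pipefail

build()  { echo "[build] compiling artifacts"; }
deploy() { echo "[deploy] pushing artifacts"; }

# The whole wrapper is just the steps in order:
build
deploy
```

Adding a new workflow means adding a new three-line wrapper, never editing a central file.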
2.4. Perfect for Workflows I Own
Shell scripts shine when the workflow is under my control:
- Deployment
- Credentials get/shred
- Build flows
- Local run harnesses
- Multi-step automation where steps are meaningful units
This is where composition > presets.
3. Where Make Actually Becomes Useful
Here’s the surprising part:
I don’t use Make for my workflows.
I use Make only for one thing:
Turning a long, annoying third-party command into a short, memorable preset.
3.1. Make as a Preset Layer for External Tools
Examples:
- pytest
- coverage
- linters and type checkers
- docker or docker-compose commands
- JavaScript tools with huge flag lists
- any CLI where I don’t want to remember 6 positional arguments
Take pytest, which can get ridiculous:
```shell
poetry run pytest \
  tests/unit/some/long/path/test_x.py::TestClass::test_method \
  -m "not network and not slow" \
  --maxfail=1
```
Make flattens that to:
make <preset-name>
3.2. Make for Curried Commands
Make targets are essentially:
Curried versions of an external CLI (preset arguments baked in)
Instead of:

```shell
poetry run pytest -m "integration" --maxfail=1
```

I do:

```shell
make test-integration
```
I don’t want to memorize flags. I don’t want to memorize marker syntax. I don’t want to memorize file::class::method paths.
Make solves all of that in one stroke.
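The “curried” preset above might look like this in a Makefile (a minimal sketch; the target name and flags are just the ones from the example, and recipe lines must be tab-indented):

```makefile
.PHONY: test-integration
test-integration:
	poetry run pytest -m "integration" --maxfail=1
```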
3.3. Make for Quick “Task Menus”
A Makefile doubles as a quick menu. With a self-documenting help target as the default, a bare:

make

→ shows all your presets, without writing a custom bash dispatcher.
This is especially useful for:
- test-local
- test-network
- test-integration
- test-file FILE=tests/foo/test_bar.py::test_baz
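Plain make only runs the first target, so the “menu” behavior comes from a small self-documenting help target. One common sketch of that convention (the grep/awk trick is a widely used community pattern, not anything specific to this setup):

```makefile
.PHONY: help
help:  ## Show this menu (first target, so a bare `make` prints it)
	@grep -E '^[a-zA-Z_-]+:.*?## ' $(MAKEFILE_LIST) | \
		awk 'BEGIN {FS = ":.*?## "} {printf "  %-24s %s\n", $$1, $$2}'

.PHONY: test-local
test-local:  ## Run tests that need no network
	poetry run pytest -m "not network and not integration" tests
```

Any target annotated with a trailing `## comment` shows up in the menu automatically.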
3.4. What Make is Not for (in my world)
- Not for deployments
- Not for credentials wrappers
- Not for multi-step shell processes
- Not for build pipelines
- Not for orchestrating flows I wrote myself
Make is only for third-party tool ergonomics.
It is not my “automation engine”.
4. The Clean Rule That Emerged Over Time
Use shell scripts when:
- The logic belongs to me.
- I want transparency.
- I want composability.
- I want a directory full of meaningful, callable steps.
- I want new cases to appear by simply creating new scripts.
Use Make when:
- The underlying CLI is third-party.
- The CLI has lots of flags I don’t want to remember.
- I want short, memorable entrypoints.
- I want a discoverable menu of presets.
- I want to avoid copying long commands from shell history.
The dividing rule:
**Shell for composition.
Make for presets.**
This single rule has kept all my automation clean, scalable, and brain-friendly.
5. Canonical Example: Using Make for Pytest
Makefile:
```makefile
.PHONY: test-local test-network test-integration test-file

test-local:
	poetry run pytest -m "not network and not integration" tests

test-network:
	poetry run pytest -m "network" tests

test-integration:
	poetry run pytest -m "integration" tests

test-file:
	poetry run pytest $(FILE)
```
Now I run:
```shell
make test-local
make test-network
make test-file FILE=tests/foo/test_bar.py::TestClass::test_method
```
Clean. Zero memorization of long pytest invocations.
6. Canonical Example: Shell Scripts for Deploy Flows
Folder:
```
scripts/
  creds_get.sh
  creds_shred.sh
  build.sh
  deploy_core.sh
  deploy_only.sh
  build_and_deploy.sh
```
Orchestration:
```shell
./scripts/build_and_deploy.sh
```
This is purely compositional:
- steps
- layers
- transparency
- no central file to maintain
This is where shell is simply better.
7. The Other Advantage of Make/Just: Built-In Discoverability
There’s one more point worth noting, because it does matter in practice — even though it isn’t the core reason I use Make.
Make, Just, Taskfile, etc. give you:
A free, always-up-to-date list of every command you have.
This matters because the alternative in pure bash is clunkier:
- You’d have to write your own dispatcher (task.sh)
- You’d need manual parsing (case "$1" in …)
- You’d need a --help flag
- You’d need to maintain that by hand every time you add/remove a script
- You’d now have two sources of truth (bash logic + the README)
Make/Just solve all of this instantly.
A simple:
make
or:
just --list
gives you:
- A readable menu of tasks
- Short descriptions
- No code needed to generate it
- No need to remember paths or script names
- No need to maintain a dispatcher script
This is why Make (or Just) scales well when you have 10+ presets — especially for things like:
- Test subsets
- Pre-configured build commands
- Complex “run this tool with these flags” shortcuts
- Variations of scripts that touch the same underlying engine (pytest, coverage, etc.)
Markdown + Shell Scripts Is Still Great
For workflows that are inherently compositional:
- deployment
- credentials
- build flows
- local runs
…it’s simpler to have:
- a directory of small scripts, and
- a short Markdown document describing them (docs/automation.md, etc.)
That’s low overhead, and you should have that documentation anyway.
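A sketch of what that document might contain (the script names come from the deploy-flow example above; the descriptions are illustrative):

```markdown
# Automation scripts

- scripts/build.sh: build artifacts only
- scripts/deploy_only.sh: deploy the last build without rebuilding
- scripts/build_and_deploy.sh: run build.sh, then deploy_only.sh
- scripts/creds_get.sh: fetch deploy credentials (meant to be sourced)
- scripts/creds_shred.sh: remove credentials afterwards (meant to be sourced)
```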
But for preset-heavy external tools, Make gives you:
- discoverability
- grouping
- examples
- help output
- no dispatcher maintenance
- no custom argument parsing
All essentially “for free”.
8. Conclusion
Both tools are great — when you use them for the right things.
- Shell scripts → Best for building and composing real workflows.
- Make → Best for shortening complex third-party commands into simple presets.
This framework has made all my projects cleaner and more ergonomic. It avoids over-engineering, avoids duplication, and keeps automation aligned with how I naturally think about tasks.
If you work the way I do, this division will make your automation dramatically simpler.