Taking baby steps helps us go faster.
Much has been written about this topic, but it comes up so often in pairing that I feel it’s worth repeating.
I’ll illustrate why with an example from a different domain: recording music. As an amateur guitar player, I attempt to make recorded music. Typically, what I do is throw together a skeleton for a song — the basic structure, the chord progressions, melody, and so on — using a single sequenced instrument, like a nice synth patch. That might take me an afternoon for a 5-minute piece of music.
Then I start working out guitar parts — if it’s going to be that style of arrangement — and begin recording them (musos usually call this “tracking”).
Take a fiddly guitar solo, for example; a 16-bar solo might last 30 seconds at ~120 beats per minute. Easy, you might think, to record it in one take. Well, not so much. I’m trying to get the best take possible, because it’s metal and standards are high.
I might record the whole solo as one take, but it will take me several takes to get one I’m happy with. And even then, I might really like the performance of the first 4 bars on take #3, really like the last 4 bars of take #6, and be happy with the middle 8 from take #1. I can edit them together (it’s a doddle these days) to make one “super take” that’s a keeper.
Every take costs time: at least 30 seconds, if I let my audio workstation software loop over those 16 bars and record a new take each time.
To get the takes I’m happy with, it cost me 6 x 30 seconds (3 minutes).
Now, imagine I recorded those takes in 4-bar sections. Each take would last 7.5 seconds. To get the first 4 bars so I’m happy with them, I would need 3 x 7.5 seconds (22.5 seconds). To get the last 4 bars, 6 x 7.5 seconds (45 seconds), and to get the middle 8, just 15 seconds.
So, recording it in 4-bar sections would cost me 1 minute 22.5 seconds, rather than 3 minutes.
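To make that comparison concrete, here’s a rough back-of-the-envelope sketch. It just replays the numbers already given above (the section names, lengths, and take counts are the illustrative figures from the example, nothing more):

```python
# Rough cost comparison using the numbers from the example above:
# a 16-bar solo is ~30 seconds, so a 4-bar section is ~7.5s and the middle 8 is ~15s.

FULL_TAKE_SECONDS = 30.0

section_lengths = {"first 4 bars": 7.5, "middle 8": 15.0, "last 4 bars": 7.5}
takes_needed = {"first 4 bars": 3, "middle 8": 1, "last 4 bars": 6}

# Whole-solo approach: every attempt replays all 16 bars; the example needed 6 takes.
whole_solo_cost = 6 * FULL_TAKE_SECONDS

# Sectioned approach: each section loops only until that section is good.
sectioned_cost = sum(takes_needed[s] * section_lengths[s] for s in section_lengths)

print(f"whole-solo takes: {whole_solo_cost:.1f}s")  # 180.0s (3 minutes)
print(f"sectioned takes:  {sectioned_cost:.1f}s")   # 82.5s  (1m 22.5s)
```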
Of course, there would be a bit of an overhead to doing smaller takes, but what I tend to find is that — overall — I get the performances I want sooner if I bite off smaller chunks.
A performance purist, of course, would insist that I record the whole thing in one take for every guitar part. And that’s essentially what playing live is. But playing live comes with its own overhead: rehearsal time. When I’m recording takes of guitar parts, I’m essentially also rehearsing them.
The line between rehearsal and performance has been blurred by modern digital recording technology. Having a multitrack studio in my home that I can spend as much time recording in as I want means that I don’t need to be rehearsed to within an inch of my life like we had to be back in the old days when studio time cost real money.
Indeed, the lines between composing, rehearsing, performing, and recording have been completely blurred. And this is much the same as in programming today.
Remember when compilers took ages? Some of us will even remember when compilers ran on big central computers, and you might have to wait 15–30 minutes to find out if your code was syntactically correct (let alone if it worked).
Those bad old days go some way to explaining the need for so much up-front effort in “getting it right”, and they fuelled the artificial divide between “designing”, “coding”, and “testing” that sadly persists in dev culture today.
The reality now is that I don’t have to go to some computer lab somewhere to book time on a central mainframe, any more than I have to go to a recording studio to book time with their sound engineer. I have unfettered access to the tools, and it costs me very little. So I can experiment. And that’s what programming (and recording music) essentially is, when all’s said and done: an experiment.
Everything we do is an experiment. And experiments can go wrong, so we may have to run them again. And again. And again. Until we get a result we’re happy with.
So biting off small chunks is vital if we’re to make an experimental approach — an iterative approach — work. Because bigger chunks mean longer cycles and longer cycles mean we either have to settle for less — okay, the first four bars aren’t that great, but it’s the least bad take of the 6 we had time for — or we have to spend more time to get enough iterations (movie directors call it “coverage”) to better ensure that we end up with enough of the good stuff.
This is why live performances generally don’t sound as polished as studio performances, and why software built in big chunks tends to take longer and/or not be as good.
With guitar, the more complex and challenging the music, the smaller the steps we should take. I could probably record a blues-rock number in much bigger takes because there’s less to get wrong. Likewise in software, the more there is that can go wrong, the better it is to take baby steps.
It’s basic probability, really. Guessing a 4-digit number is orders of magnitude easier if we guess one digit at a time.
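To see why, here’s a tiny worst-case comparison. It’s only a sketch of the intuition: the assumption that you get feedback after each digit in the second case (and only a whole-code right/wrong in the first) is mine.

```python
# Worst-case guesses for a 4-digit code (0000–9999).
# Illustrative sketch only: assumes per-digit feedback in the second case,
# and only "right/wrong" for the whole code in the first.

DIGITS = 4

# Whole number at once: no feedback until all four digits are right,
# so in the worst case you try every combination.
whole_number_guesses = 10 ** DIGITS      # 10,000

# One digit at a time: at most 10 tries per digit.
digit_by_digit_guesses = DIGITS * 10     # 40

print(whole_number_guesses, digit_by_digit_guesses)   # 10000 40
```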