
Thinking In Someone Else’s Box

That tweet by MClare got me to read CAD’s Boring Future and Why it’s Exciting by Daniel Davis, which got me thinking about a number of topics. There are at least three, and probably more. There’s his graph of commonality versus uniqueness, there’s his discussion of how changes in CAD reflect and change that graph, and there’s my instinctive response, which is to try to fit all of it into my experience at work. I make no apologies for the last, because blog posts are one of the last bastions of pure self-centeredness.

I could quibble about details of the graph, but I basically agree with it. You can make a version of this graph for anything human-made, from buildings to cars to meals to novels to songs and so on. To some degree, it’s even tautological: the more unique something is, the rarer it is. Said like that it sounds ridiculous, but put in terms of an example it makes sense: the more a building’s use and basic geometry conform to a standard, the more common examples of that building type will be. I was worried at one point, early in my career, that I might get stuck designing the same six-story suburban office building over and over again.

The main thrust of Davis’s article – in my opinion – is that new versions of computer-aided design will put more emphasis on the “aided” part, by using algorithms specialized to individual building types to help generate designs. In short, he’s saying that most buildings are not unique (per the chart) and there’s no reason to start from scratch when designing in the “fat middle.” I agree with this idea less, because it seems to me to be a self-fulfilling prophecy. If you start by saying the design will be similar to all the others in its class (by occupancy, as he does, or by lot size or number of occupants or any other metric you care to use) and then use software based on that premise to help you, you will design buildings that are very much like everything else in that class. I do believe that there are new ideas in architecture sometimes, and working in this manner seems like a good way to shut them out.

Finally, how does that algorithmic future relate to old-building work? Not very well. Davis includes a picture of some brownstones, probably in Brooklyn, in his article; I’ve countered with a group from Harlem. His point was that generative design could easily have produced brownstones or tenements because so many of them were built so similarly. That’s true as far as it goes, although my objection about designs ever changing for the better still stands. But individual buildings change over time as people modify them for different purposes and with different ideas about what is best. Even general details – Is there a stoop or not? Is there a rear-yard extension? Is the basement full height? – vary between buildings in a single row and can’t be predicted by the pattern.

This stumbling block resembles one concerning the use of robots in construction. Robots have been successfully introduced into factories because they are very good at repetitive tasks; the problem with construction sites is that even when the tasks remain the same, the context keeps changing. Existing-building work is far worse: the tasks themselves aren’t always the same. As soon as you need a human to help the robot understand how each task is different, the work gets done faster by the human alone. Maybe, probably, that will change over time, but the very slow progress of self-driving cars, which face a similar issue of an ever-changing context, suggests that change is not coming soon.

There is already a fair amount of differentiation between new-building design and existing-building alteration design. If there were not, there would be no need for an existing building code. If Davis’s vision of the future is correct, that differentiation is likely to increase.
