How Things Break: Hyper-Optimization

Tangible things break for a variety of reasons, but in most cases the causes are easy to understand: 1) a part wore out; 2) a part was defective; or 3) the device was designed to break and be unrepairable to force us to buy a new one, i.e. planned obsolescence.

Systems break for other less identifiable reasons, reasons that are often hidden beneath a superficial veneer of normalcy and stability. When systems break, those on the outside are surprised. Those on the inside are not surprised it broke; they’re surprised it lasted this long, because they witnessed the gradual decay, debasement and hollowing out of the system: all consequences of hyper-optimization.

Optimization is a core mechanism of increasing productivity, efficiency and profits. When we optimize processes, we seek ways to do more with less, reduce waste and quality-control failures, cut costs, and increase market share and profits.

All optimization serves one goal: increase profits, as profits are the Measure of All Things, the one and only measure of success in the global economy.

Optimization reduces complex processes and systems into data that then guides the optimization.

If a steel barrel currently requires seven spot-welds and reducing this to six doesn’t cause the barrel to leak, then the process is optimized by reducing the inputs (materials, resources, energy, labor, capital, etc.) while maintaining the output (barrels that don’t leak).

Optimization is the process of identifying what works to lower costs, eliminate competition, gain market share, etc., and doing more of what has worked well.

Consolidation is a key factor in optimization. If scattered production facilities are relocated to one transportation hub, costs can be reduced.

Optimization focuses on the present, as profits are measured in the present. Externalities such as the future waste stream are not included in the optimization data because the enterprise is not responsible for those costs.

What happens to a community when the production facility that provided half its jobs is relocated to consolidate production is likewise not included, as the community’s fate does not affect profits.

Optimization considers risks and returns. If the probabilities of disruption are low, then the process optimizes normalcy: since the vast majority of the time conditions are stable, then the systemic “insurance” (redundant production facilities, warehousing spare parts, etc.) against disruption can be reduced as unnecessary expenses. This optimization boosts profits.
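The risk calculus described above can be sketched in a few lines of arithmetic. All of the figures below are hypothetical assumptions chosen for illustration; the point is the shape of the reasoning, not the numbers:

```python
# Illustrative expected-value arithmetic behind cutting systemic "insurance".
# Every figure here is a hypothetical assumption, not data from the essay.

annual_insurance_cost = 2.0   # e.g. $2M/yr for redundant facilities, spare parts
p_disruption = 0.01           # assumed 1% chance per year of a major disruption
loss_if_unprotected = 50.0    # assumed $50M loss if disruption hits with no buffer

# The optimization compares annual expected costs:
expected_cost_with_insurance = annual_insurance_cost        # 2.0
expected_cost_without = p_disruption * loss_if_unprotected  # 0.5

# Verdict of the spreadsheet: drop the insurance, book the savings.
print(expected_cost_without < expected_cost_with_insurance)  # True

# But over a long horizon, the "optimized" system almost certainly
# eats the full unbuffered loss at least once:
years = 100
p_at_least_one_disruption = 1 - (1 - p_disruption) ** years
print(round(p_at_least_one_disruption, 2))  # ~0.63
```

The single-year comparison favors cutting the redundancy, which is exactly how the "unnecessary expense" gets optimized away; the long-horizon probability is the part the present-focused metric never sees.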

Optimization cuts corners in ways that are not readily visible to those who weren’t involved in deciding which data would be collected as the key metrics used to further optimize yields, gains or results, all versions of the same thing.

The choices made about what would be measured and collected for analysis may have seemed obvious, but what those choices left out of the optimization process is not just less obvious; it may be invisible until the system breaks down. By then it’s too late.

This reliance on, indeed worship of, data as the essential foundation of optimization leads to the promotion of a “bean counter” methodology and value system, in which those who massage the data become the leaders not just of the optimization process but of the system.

Tangible objects are relatively straightforward to repair or replace, or, if no repair or replacement is possible, to bypass or patch with a kludgy fix, the equivalent of duct-taping it together until a more permanent fix becomes available.

Systems are not quite so forgiving, as they are complex and emergent, meaning the entire assembly of parts and subsystems generates new effects that can’t be predicted from the attributes of each part or subsystem.

Systems are also prone to phase shifts from linear states (predictable chains of causality) to non-linear states in which stability abruptly shifts into instability and chaotic behaviors that don’t respond to the usual set of controls.

Optimization is thus prone to the illusions of precision and predictability, which lend themselves to dismissing the probabilities of disruptive externalities (the famous Black Swans) or chaotic breakdowns arising from apparently minor failures.

Humans are part of systems, and the desire to eliminate them as non-optimal bits that can be replaced by low-cost, optimized algorithms is natural, but also deeply flawed.

Humans are also complex systems, and so reducing their performance, motivations and incentives to data points fails to capture the essence of their roles in the system.

To understand all these inherent limits, vulnerabilities and points of failure in the processes of optimization, let’s consider some examples.

CHS NOTE: It would be nice to be a Trustafarian or the recipient of a 3-letter agency Black Budget line item, but alas, writing is my only paid work/job. Who knows, something posted here may be actionable and change your life in some useful way. I am grateful for your readership and blessed by your financial support.

