If I remember my catastrophe mathematics correctly, it describes how slowly and progressively increasing the pressure on a system can cause abrupt, “catastrophic” changes in the system’s state.
However, the textbooks don’t go further than describing a two-dimensional system; they do mention multi-dimensional state spaces, but only in passing.
Yes, the math is essentially the same. I could probably describe a humane-behaving AI, or at least its emotional component, with as few as eight or ten dimensions. But dealing with such objects is very hard, and debugging something like this would be hell. It won’t be practical.
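The two-dimensional case the textbooks do cover can at least be sketched concretely. The following is a minimal, purely illustrative simulation (not from the original text) of the cusp catastrophe: a state x relaxes into a minimum of the potential V(x) = x⁴/4 + a·x²/2 + b·x, and as the control parameter b is slowly swept, the minimum the state was tracking vanishes and x jumps abruptly to the other branch.

```python
# Cusp catastrophe sketch: with a = -1, slowly sweeping the control
# parameter b makes the tracked equilibrium of
#   V(x) = x**4/4 + a*x**2/2 + b*x
# jump abruptly from the upper branch to the lower one.

def equilibrium(x, a, b, steps=2000, dt=0.05):
    """Relax x by gradient descent on V: dx/dt = -(x**3 + a*x + b)."""
    for _ in range(steps):
        x -= dt * (x**3 + a * x + b)
    return x

def sweep(a=-1.0, n=200):
    """Sweep b slowly from -1 to 1, letting the state follow its basin."""
    xs = []
    x = equilibrium(1.0, a, -1.0)  # start on the upper (positive) branch
    for i in range(n + 1):
        b = -1.0 + 2.0 * i / n
        x = equilibrium(x, a, b)   # pressure rises slowly, state tracks it
        xs.append((b, x))
    return xs

states = sweep()
jumps = [abs(x2 - x1) for (_, x1), (_, x2) in zip(states, states[1:])]
print(max(jumps))  # one large step: the "catastrophic" transition
```

The point of the sketch is that nothing in the slow drift of b hints at the discontinuity; the jump appears only when the fold point (here near b ≈ 0.385) is crossed. The multi-dimensional version replaces the scalar x with a state vector, which is where the practical trouble begins.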
Today’s million-dollar question: Can we simplify the multidimensional modelling enough that it can actually be dealt with? Could we instead just collect data by asking sample people numerous questions about the most interesting points in the multidimensional space? And would the resulting behaviour pattern have anything to do with reality, rather than with their imagination?
Oh, and a trillion-dollar question: Just where does the social imagination of reality begin and social reality end? We distinguish between ‘official fiction’ and ‘actual social reality’, but where is the boundary at which unofficial fiction stops being actual social (not personal) reality?
This could be vitally important.