(Originally posted in Social experience design chatroom.)
An interesting essay about the meaning of the word “dehumanization”. Like so many important words, its use varies significantly from person to person. The author focuses more on what I would call the extreme forms (he calls them demonizing dehumanization). But I think society is also plagued by softer forms that may not even be recognized as such. It's the objectifying kind, where people simply become numbers and are treated as numbers. In the digital age this is almost the default: for the techie working on behavioral classification and influence algorithms at Meta, all of humanity is but a dataframe. They have no emotional connection with the individuals on the receiving end of their actions. Unfortunately, one of the principal factors seems to be the sheer mismatch between our ability to internalize others as individuals just like us and the total number of “others”. This mismatch of course predates the digital age; it is a fundamental challenge on the way to humane societies…
This soft dehumanization is so systemic and pervasive that it's completely normalized. Think, for example, about how companies (the overdominant pattern of organizing economic activity) report on their activities. There are hundreds of pages of numbers and classifications about all sorts of things, but the actual people - the very essence of the entity - are basically a summary cost figure. So many thousand warm bodies, costing so much. Before we had adtech dataframes we had spreadsheets, but the way we use whichever tools reflects the mental models of our condition, which can be extremely biased and limiting.
That kind of activism is needed and essential, but only when properly dosed and part of a more strategic approach. The ultimate goal is to get people collaborating in a concerted, solution-oriented way. Punishment should not be doled out too easily, which - especially in fediverse/FOSS circles - it currently is, imho. When that is the case, and there is no CALM culture, the activism is likely to become ineffective or even to backfire. You will lose the AI enthusiasts and push them to the opposite camp, rather than help them improve their ways with AI, for instance.