I posted a Question topic “Ensuring best outcomes from AI disruption” to the Gleam community. I shall copy the text below …
As an activist with a strategic mindset I note how much of the activism out there is haphazard, inefficient, and may even backfire. And with the mad AI hype cycle things are no different. Opponents of AI do awareness-raising, give stern "thou shalt not use this" warnings, and anyone who uses AI against their advice is met with harsh judgment, dogpiling, cancellation, etc. All kinds of parasocial interactions and dynamics that are very detrimental to healthy ecosystem formation.
Today I am focusing on Social experience design (SX), solution development for grassroots movements, and (hopefully) triggering the emergence of a Social coding commons: a sustainable movement that is in control of its future, i.e. is "commons-based" and able to responsibly evolve technology ecosystems and the solutions they need to support.
For SX I defined the notion of CALM culture, which stands for Constructive Activism-Led Movements, where constructive activism becomes part of the culture and a steering/guiding force for the ecosystem's evolution. Activism is only constructive if it serves a purpose and offers a path towards a solution. There's a process that runs from awareness-raising to winning people over and involving them in collaborative efforts on the solution side. That's how you get healthy outcomes, the best results.
How do we do that for the Gleam ecosystem?
What will be the impact of AI on the Gleam ecosystem and community, and how can we get the best outcomes from all the ongoing AI technology introductions? That is the topic of this thread.
I only recently started experimenting with LLMs - more in the spirit of Sun Tzu's "know thy enemy", though I am open to finding friends too - and I notice a fragmented, dispersed body of work on best practices for using AI and getting better outcomes.
What Gleam might offer is a best-practices hub: a pattern library and documentation that informs about the dangers, and then emphasizes and highlights the gems, the useful application areas, and how to stick to them.
One example of such a best practice: I just signed up to Anthropic to test-drive Claude Code, and bumped into the "Personal preferences" prompt box in my account settings. This is a first opportunity to copy/paste "the way Gleam does things" from a template.
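As a rough illustration of what such a template could contain (this is my own hypothetical sketch, not an official Gleam artifact; the wording and items are assumptions drawn from general Gleam conventions):

```text
## Personal preferences: "the way Gleam does things" (draft template)

- Always run `gleam format` on generated code and follow its output style exactly.
- Model failures with `Result` values and pattern matching; Gleam has no exceptions.
- Prefer explicit `case` expressions over clever shortcuts; cover all branches.
- Use snake_case for functions and variables, PascalCase for custom types.
- When unsure of an API, say so rather than inventing standard-library functions.
```

A community-maintained version of this template, iterated on in the open, could be one of the first concrete artifacts of a best-practices hub.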
Posted [to Social coding commons] and the fediverse…
I mention Risks there, and that may be a good place to start. Currently the Gleam community is small, and a cozy place. The real AI onslaught is primarily going on in the most popular languages, e.g. TypeScript. For example Cloudflare, who after a couple of attempts of their own, now with the help of AI re-created NextJS and launched it as Vinext, which was immediately taken into production by a US government agency - after only one week of vibe coding and $1,100 spent on tokens. The jury is still out on whether this will prove to be a disaster, or part of a pivotal inflection point that will forever change the IT landscape.
To what extent does Cloudflare understand their own Vinext codebase atm, for example?
Is that even relevant anymore, if AIs become true coding experts?