The term "AI" has become meaninglessly broad.
It encompasses everything from recommendation algorithms to large language models to computer vision systems. "Smart systems," aka "extremely logical code," have been embedded in our infrastructure forever.
Truly, well done to the marketing and sales teams. 👏
Calling an LLM like ChatGPT, Claude, or Gemini a "probability engine" is much more accurate. I would wager that users and consumers would interact differently with those products simply because of that branding.
What we call "AI" is really sophisticated, rule-following logic applied at scale to large data sets: a pattern-matching system. Calling it "AI" makes it sound revolutionary, and it implies it will produce human-like reasoning.
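To make the "probability engine" framing concrete, here is a deliberately tiny sketch: a toy model that predicts the next word purely from bigram counts in a corpus. Real LLMs use neural networks over tokens rather than raw word counts, and this example (the corpus, function names, everything) is invented for illustration, but the core mechanic is the same: score possible continuations by probability, no reasoning involved.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(word):
    """Return P(next | word) estimated from bigram counts."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# After "the", the model assigns: cat 0.5, mat 0.25, fish 0.25
print(next_word_probs("the"))
```

Swap the counting for a transformer and the word list for a token vocabulary, and you have the shape of a modern LLM: a probability distribution over what comes next.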
We've had globalized technological dependencies for decades... but we've tech-washed that, obfuscating the dirty reality: a tangled infrastructural mess of legacy codebases.
We've embedded "smart systems" in everything from search engines to power grids to financial trading and have created a dependency without most of us really knowing what we're depending on.
When everything is "AI," it becomes much harder to have nuanced conversations about which applications are beneficial, which are problematic, and which are simply overhyped automation.
Offline large language models, stored on tablets, phones, and computers and updated periodically with new releases, could revolutionize education in resource-scarce regions and poverty-stricken populations.
Overblown "AI" rhetoric and corporate policy positioning will continue to undermine public support for and trust in tools that would otherwise be valuable educational infrastructure.
We cannot let the terrible PR that AI is getting ruin an otherwise inexpensive global educational revolution.