This week, I threw myself into a Google/Kaggle AI course — somewhere between trying to stay on top of real work and wrestling with machine logic that rarely waits for anyone. White papers flew, models misbehaved, and Discord was chaos, with mods racing to respond to over 250,000 curious minds all asking, “Wait… what just happened?”

As someone still finding my footing in this AI world — not a developer, just a comms person with too many questions — I’m not here to code the future. I’m here to translate it. To ask what all this means, where it gets it wrong, and where the human voice still fits. This week, I got caught up in defining the word “language”…

If something speaks like it loves but doesn’t feel — what does that mean for how we communicate, connect, and trust?


These are my notes from the noise. The learning, the mess, the wonder — and the very real question of where we all go from here.

1. Prediction Engines, Not Genies. Google and Kaggle did the world a real favor with this course. They cut through the hype and grounded AI in what it is — prediction engines, not magical genies in a bottle. The pace of the training was intense, but what stood out most were the white papers. These weren’t your typical academic overload — they made complex AI concepts feel approachable. I seriously recommend checking them out on Kaggle. One key insight stuck with me: AI models are, at their core, just predictive tools — and they come with real limitations. That simple truth reshaped how I think about AI, especially as someone trying to communicate these ideas in a clear, honest way.
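
To make “prediction engine” concrete, here is a minimal sketch of what a language model actually does: it assigns probabilities to possible next tokens. It assumes the Hugging Face transformers library and the small GPT-2 model purely for illustration; any causal language model would show the same behaviour.

```python
# Minimal sketch: a language model as a next-token prediction engine.
# Assumes the Hugging Face `transformers` library and the small GPT-2 model
# (illustrative choices, not from the course material).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Kenya is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probabilities over the vocabulary for the *next* token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id):>12}  {prob.item():.3f}")
```

The model never “knows” the answer; it only ranks continuations by likelihood, which is exactly the limitation the course kept coming back to.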

2. Transparency and Truth: What Anthropic Revealed. Anthropic made waves this week by releasing a paper that feels like an “opening of the black box.” It confirmed what many psychologists, communication experts, and systems thinkers have been saying: AI isn’t always rational. It can fall into motivated reasoning or even unfaithful reasoning. Put plainly — AI can “lie,” not maliciously, but because of how it processes inputs based on patterns, not truth. The implications are vast. As these models get better at sounding human, they also risk being believed too easily. This makes transparency a cornerstone of ethical AI development. Knowing that an answer might be statistically generated rather than semantically “correct” should change how we interact with these systems — and what we teach the public.


3. Creativity Meets Computation: The Studio Ghibli Craze. OpenAI also got the internet buzzing this week with their tutorial: How to Create Studio Ghibli-Style AI Images on ChatGPT. The response was explosive. People across the world dove into creating whimsical art in the beloved Ghibli aesthetic. As a result, ChatGPT reached a new milestone — over 150 million weekly active users. But while debates raged about copyright and the ethics of AI-generated art, another milestone passed almost unnoticed. Google’s Gemini 1.5 Pro quietly made waves in a different direction. One standout example from their white paper? The model taught itself how to translate Kalamang, a language with fewer than 200 speakers, after just reading a grammar manual. This highlights the unexpected capabilities of frontier AI and underlines the importance of training data and structure — even for low-resource languages.

4. Experimenting with African Languages: Gaps in Translation. Inspired by this, I ran my own tests using local languages. When I requested a translation into Kalenjin, OpenAI’s ChatGPT returned results in Marakwet instead — a related but distinct dialect.

Meanwhile, Google’s Flash 2.0 model managed to generate a phrase in Kikuyu: “Nĩngwendete kũgeria kwandĩka na Gĩkũyũ, no ndirĩ na ũhoti mũiganu ta ũcio wa mũndũ. No nĩngũheana macookio marĩa ndĩĩ kũheana.”
(Translation: “I would like to try writing in Kikuyu, but I don’t have the full capability like a human. However, I can provide the responses that I am able to give.”) But even here, accuracy was a mixed bag. For instance, it translated “mistake” as makosa (Swahili), rather than the Kikuyu word mahatia. This shows how AI systems still rely heavily on whatever is most dominant in their training data — and underscores why we need locally relevant, African-led models.
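
For anyone who wants to repeat these informal tests, here is a rough sketch of how they could be scripted, assuming Google’s google-generativeai Python SDK. The model name and prompts are illustrative only; the point is to compare outputs across languages, not to benchmark anything.

```python
# Rough sketch for repeating the low-resource translation test.
# Assumes the `google-generativeai` package (pip install google-generativeai)
# and a valid API key; model name and prompts are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-2.0-flash")

phrases = ["I made a mistake.", "I would like to learn your language."]
for phrase in phrases:
    prompt = f'Translate the following sentence into Kikuyu (Gĩkũyũ): "{phrase}"'
    response = model.generate_content(prompt)
    print(phrase, "->", response.text.strip())
```

Running the same loop against different providers, or different low-resource languages, makes the gaps in training data visible very quickly.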

5. MCP: Anthropic is leading the way again — this time by open-sourcing a protocol designed to make AI more accessible to everyone. There’s a growing and hopeful shift toward transparency and simplicity in the AI world. By revealing how their systems work “under the hood,” Anthropic is helping to demystify AI development for people who aren’t tech specialists. This kind of openness is key — it builds trust and encourages wider adoption.

Personally, I follow a simple rule when it comes to tools: if it’s not easy to learn or essential to my work, I won’t invest my time in it. That’s why I haven’t gone too deep into complex automation yet. But I truly believe large language models like Grok, Claude, Gemini, and ChatGPT are evolving toward seamless integration into everyday life. With protocols like Anthropic’s Model Context Protocol (MCP) becoming more refined, we’re moving closer to a world where you don’t need to be a tech expert to benefit from AI.
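
What finally made MCP click for me is how small a working example can be. Below is a minimal sketch of an MCP server exposing a single tool, assuming the official mcp Python SDK; the server name and the glossary tool are my own made-up illustration, not something taken from Anthropic’s documentation.

```python
# Minimal sketch of an MCP server with one tool.
# Assumes the official `mcp` Python SDK (pip install "mcp[cli]").
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("glossary")  # hypothetical server name

@mcp.tool()
def define_term(term: str) -> str:
    """Return a plain-language definition for a small set of AI terms."""
    glossary = {
        "llm": "A large language model: a system trained to predict the next token.",
        "mcp": "Model Context Protocol: a standard for connecting AI apps to tools and data.",
    }
    return glossary.get(term.lower(), f"No definition stored for '{term}'.")

if __name__ == "__main__":
    # Runs the server over stdio so an MCP-capable client
    # can discover and call the tool.
    mcp.run()
```

Once a server like this is running, an MCP-capable client can discover the tool and call it on demand — which is the kind of seamless, non-expert integration the protocol is aiming at.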

6. Meta, Open Source, and Africa’s Missing Puzzle Piece. Meta continues to champion the open-source movement in AI, fostering an ecosystem where anyone can build and innovate. However, this progress brings me back to a question I keep asking: When will Africa start building its own models? Kenya’s AI strategy, for instance, reads like a blueprint for a basic housing project. While that might sound harsh, it reflects our traditional bureaucratic lens — not a forward-thinking innovation framework. Globally, even leading thinkers like Yann LeCun are urging humility. In one talk, he reminded us: “We still don’t have self-driving cars. We still can’t build systems that deal effectively with the real world.” Language processing is one thing. Physical intelligence? A whole different ballgame.

7. Language, Cognition, and the Road to AGI. For the first time, we in the AI community wrestled with defining the “soul” of the words we speak. Language has long been defined as a communication tool. But deeper cognitive theories, like Noam Chomsky’s universal grammar, challenge us to think of language as inherent to human thought. If we ever hope to achieve AGI — Artificial General Intelligence (or AMI, Advanced Machine Intelligence, as some prefer) — we must rethink language itself. Not just as vocabulary and grammar, but as a multi-sensory, causal system tied to perception and experience. In my view, redefining language in this way will bring us significantly closer to building machines that “understand” in a more human sense.

Coming Soon: A 6-Part Series on Language, AI, and Meaning

Words Without Souls: AI, Language, and the Human Chorus

What makes us human when we talk? Is it the words we choose, the way we hug, or the silence we share? Communication is our lifeline — a chorus of tools from language to touch.

Today, artificial intelligence — with its Large Language Models (LLMs) like Grok, Claude, and ChatGPT — wields words with eerie fluency. But are they speaking — or just echoing?

In this 6-part series, we’ll explore this clash: human intelligence, rich with memory and emotion, versus AI’s statistical brilliance.

What to expect:

  1. Senses vs. Stats — Contrasting human cognition with AI pattern recognition.
  2. Inside the Black Box — What Anthropic’s latest paper reveals about how AI “thinks.”
  3. Crossing Cultures — Can AI grasp cultural meaning like “hygge” or “kazoku”?
  4. When Words Fail — Security, jailbreaking, and how fragile AI communication really is.
  5. The Soul Question — Philosophy, responsibility, and what makes “understanding” real.
  6. The Road Ahead — Trust, misinformation, and collaboration in a shared future.

Voices like Noam Chomsky, Yann LeCun, Judea Pearl, and Deborah Tannen will guide us — pioneers and skeptics alike. Whether you’re into tech, culture, or just curious about where we’re headed, this series is for you.
