https://ukidlucas.beehiiv.com/p/ai-zen-garden
March 08, 2025 by Uki D. Lucas
I often imagine early humans gathered around a bonfire, sharing stories and chipping away at obsidian shards to create tools. In my own life, I notice a curious parallel: I sit here with my favorite note-taking app, aptly named Obsidian, and chip away at my thoughts, forging new ideas bit by bit. Just as our ancestors used volcanic bits to chip instruments of survival, I use digital chips and bits to build tools of thinking: tools meant to spark creativity, capture knowledge, and deepen my understanding of human nature.
My interests span anthropology, technology, and philosophy, which is to say, how we think, so naturally I am fascinated by artificial intelligence. While the giants in the field (ChatGPT, Grok, Llama, Claude, et al.) represent massive trillion-parameter "foundation" models, wielding hundreds of thousands of GPUs and training for millions of GPU hours, my wallet and heart are set on a more intimate relationship with AI. I am passionate about using desktop-sized AI models ranging from 7 billion to 14 billion parameters, and maybe soon 70 billion. I want something I can influence and nurture: models that are stable and immune to sudden policy changes, internet outages, and corporate espionage. I envision an "AI collegium" of specialized models, each fine-tuned with a distinct personality and working together.
Last night, I discovered the essay “Machines of Loving Grace” by Anthropic CEO Dario Amodei while listening to the lengthy five-hour episode #452 of Lex Fridman’s podcast, which features a conversation with Lex, Dario, and philosopher Amanda Askell. The discussion and the essay resonated with my anthropological sensibilities: technology should reflect our pursuit of wisdom, empathy, and moral alignment. The phrase “Machines of Loving Grace” conveys the idea that AI, at its best, could be a benevolent companion rather than an indifferent tool. I genuinely believe that.
The tools we create are already 10,000 times more knowledgeable and 100 times faster than we are (an honest estimate, not hyperbole), even though they currently lack true consciousness. I used to fear that, but I no longer do. In his 1968 paper on management principles, David Ogilvy suggested that we should seek out and hire individuals who are better than ourselves and, if necessary, pay them more than we earn. Surrounding ourselves with benevolent AI mentors and helpers, each more knowledgeable than we are, reflects the same idea.
By using "small" local models, I have access to the same world knowledge abstraction, and the ability to "adjust the knobs" makes them even more useful. To grossly simplify, the knobs are:
1. Fine-tuning each model to develop its own style and focus on a particular subject is what turns a generic large language model (LLM) into Darwin AI, Plato AI, Socrates AI, My Personal Assistant AI, My Ghostwriter AI, and, at work, my Code Expert AI, my Systems Engineering AI, my Requirements AI, and so on. A minimal fine-tuning sketch appears further below, after the Darwin example.
In 2025, if you invest in a Mac M3 Ultra with 512GB of unified RAM, you can fine-tune and run models with as many as 200 billion parameters right on your desk. I can envision an executive or entrepreneur doing just that.
A model of around 14 billion parameters is a reasonable target for the rest of us.
2. The context window of an LLM is like short-term memory: a collection of things this particular model should keep in mind while performing the task. It is more than a "Google search question" or a short ChatGPT prompt. Today's context windows run to roughly 130,000 tokens, about a medium-sized book, shorter than Dune or War and Peace. That is a lot of information about you or your business needs! A rough token-counting sketch follows this list.
3. The long-term memory of an LLM is a collection of thousands of your Obsidian notes, blog posts, articles, and scientific papers. It is usually provided as a Retrieval-Augmented Generation (RAG) system; a minimal retrieval sketch appears a little further below.
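To get a feel for how much material the second knob, the context window, actually holds, it helps to count tokens. Here is a rough sketch, assuming the tiktoken package as a stand-in tokenizer (every local model ships its own tokenizer, so the exact count will vary) and a hypothetical business_plan.md file:

```python
# Rough estimate of how much of the context window one document uses.
# tiktoken's cl100k_base encoding is a stand-in; the real count depends on
# the tokenizer of the local model you actually run.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = open("business_plan.md", encoding="utf-8").read()  # hypothetical file

tokens = len(enc.encode(text))
CONTEXT_WINDOW = 130_000  # the roughly 130,000-token window mentioned above

print(f"{tokens:,} tokens, {tokens / CONTEXT_WINDOW:.0%} of the window")
```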
These three “knobs” of fine-tuning, context windows, and RAG are precisely why I prefer local AI over giant black-box services. When I envision my “AI collegium,” I imagine a small cluster of models running quietly on a workstation next to me—each model infused with a unique personality or focus area.
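To make the long-term-memory knob concrete, here is a minimal retrieval sketch. It assumes the sentence-transformers package and a folder of Markdown notes; the vault path and embedding model are placeholders, and a real setup would chunk long notes and cache the embeddings instead of recomputing them on every run:

```python
# Minimal retrieval over an Obsidian vault: embed every note once, then
# return the notes closest to a question so they can be pasted into the
# local model's context window.
from pathlib import Path

import numpy as np
from sentence_transformers import SentenceTransformer

vault = Path.home() / "Obsidian" / "Notes"            # hypothetical vault path
notes = sorted(vault.rglob("*.md"))
texts = [note.read_text(encoding="utf-8") for note in notes]

embedder = SentenceTransformer("all-MiniLM-L6-v2")    # small local embedder
note_vectors = embedder.encode(texts, normalize_embeddings=True)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the k notes most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = note_vectors @ q
    best = np.argsort(scores)[::-1][:k]
    return [texts[i] for i in best]

# The retrieved notes become the grounding material for the local model,
# which is what turns "my notes" into the model's long-term memory.
context = "\n\n---\n\n".join(retrieve("What did I write about letter writing?"))
prompt = f"Answer using only these notes:\n{context}\n\nQuestion: ..."
```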
Imagine a "Darwin AI" loaded with the roughly 16,000 letters he wrote. Darwin was the first persona that came to my mind when, a few years back, I researched traditional letter writing and bought the book of his correspondence, day by day, letter by letter. He is by far one of the best candidates, because we know his thoughts and style precisely. You can have this famous scientist and traveler (and my personal favorite) chat with you either with his Victorian mindset and knowledge, or armed with all human knowledge ever written, if you so choose.
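How might such a "Darwin AI" actually be built? One common route is a LoRA-style fine-tune of a small open model on the letters themselves. The sketch below uses the Hugging Face transformers, peft, and datasets libraries; the base model name, the darwin_letters.jsonl file, and the hyperparameters are illustrative assumptions rather than a recipe I have validated on this corpus:

```python
# Persona fine-tuning with LoRA adapters: a small open model learns the
# style of Darwin's letters without retraining all of its weights.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "mistralai/Mistral-7B-v0.1"                  # any 7B-14B causal LM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA trains a few million adapter weights instead of all 7 billion.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# darwin_letters.jsonl: one {"text": "..."} object per letter or excerpt.
data = load_dataset("json", data_files="darwin_letters.jsonl", split="train")
data = data.map(lambda row: tokenizer(row["text"], truncation=True,
                                      max_length=1024),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="darwin-ai-lora",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=3,
                           learning_rate=2e-4,
                           logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("darwin-ai-lora")             # saves only the adapter
```

The appeal of LoRA here is that the result is a small adapter file rather than a second copy of the whole model, so an entire collegium of personas can sit next to one shared base model on a single disk.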
Another might be a "Marcus Aurelius AI," finely tuned to Stoic reflections. We do not have much more than Meditations from this philosopher-emperor, but the body of Stoic commentary written around it is immense.
Then there is "My Personal Assistant AI," always on standby to help me manage emails, calendars, tasks, and priorities. That agent also keeps track of my life goals and checks whether my day-to-day work matches them.
The "Code Expert AI" is similar to its bigger brother, GitHub Copilot, but tuned to my codebase and my needs.
Finally, I would like to create an offline model for my kids, one with no Internet connection, that is age-appropriate and stimulating enough to encourage them to ask the next question.
Because these models exist locally, I can shape their character without third-party oversight or abrupt changes in terms of service. The sense of ownership is personal, almost like caring for a garden of carefully chosen plants—each grown from a different seed requiring sunlight and water.
As Dario Amodei and Amanda Askell argue, we can’t just flood a model with data and hope it turns out well-intentioned; we must cultivate it mindfully. That’s part of the beauty of smaller, local models: they invite us to be hands-on caretakers, not consumers.
Please let me know if you are also passionate about my AI Zen Garden.
~ Uki D. Lucas
Please reply to this email or find me on:
[https://www.linkedin.com/in/ukidlucas/](https://www.linkedin.com/in/ukidlucas/)
[https://x.com/ukidlucas](https://x.com/ukidlucas)