Moving forward with Max & OpenClaw

Personal Archive · Generative AI · March 2026

2,359 Photos in 3.9 Seconds

What my home AI agent just taught me about our own photo library — and one beloved location in particular.

Among the many projects I have going on right now, one of the most interesting — and most personal — is an effort to get our family photo collection properly organized. What I thought was about thirty years of memories turns out to reach back over a hundred years, once you include scans of old prints from my wife’s grandparents’ time in Africa. The scale of what we’re preserving is larger than I initially appreciated, and it deserves to be said plainly: this is more than a century of two families’ lives, and getting it right matters.

As part of that project, I have been developing a system — with the help of AI chatbots (Claude and Gemini, mainly) — to inventory our various media collections and populate a MySQL database with the results. GPS coordinates, keywords, file hashes, metadata — it is all going in. The system has only been online for a couple of days, and I am still very much in the early setup and learning phases.

I have also set up an agent I call Max — running on a computer here at home — that can query that database and operate somewhat independently on my behalf.

A Question at the Pizza Place

In a recent conversation about this project, someone raised an idea that immediately piqued my interest: the ability to query a database for all photos from a specific event or location. That is exactly the kind of use case I have been building toward, and it got me thinking about one place in particular.

My wife Annie and I have a long connection to the Paris France Temple. We visited the site shortly after it was announced. We served there at the Visitors’ Center in 2018. And we go back every year. As you might imagine, we have a lot of photos.

So, standing there waiting for the pizza place’s doors to open, I queried Max on my phone to find out just how many.

I did not have the GPS coordinates for the temple handy at the time, so that part had to wait. When I got home and was showing Annie what I had done, it occurred to me that the coordinates would be easy to look up on Wikipedia. I did, provided them to Max, and got my answer:

There are 2,359 photos of the Paris Temple in the database.

I was curious how long the query had taken, so I asked Max directly. His answer:

“Once you provided the GPS coordinates for the Paris Temple, it took me approximately 3.9 seconds to tell you that there are 2,359 photos.”

Fun, no? 😊
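For readers curious how a location query like that can work: once each photo's latitude and longitude are in the database, a "photos near this place" search is just a great-circle distance filter. The sketch below is illustrative only; the row keys (`gps_lat`, `gps_lon`) are assumed, not taken from the real schema:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in kilometers."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def within_radius(rows, lat, lon, radius_km=1.0):
    """Filter database rows down to photos taken within radius_km of a point."""
    return [r for r in rows
            if r["gps_lat"] is not None and r["gps_lon"] is not None
            and haversine_km(lat, lon, r["gps_lat"], r["gps_lon"]) <= radius_km]
```

In practice the filtering could also happen inside MySQL itself (which has spatial functions for exactly this), but the idea is the same: the coordinates from Wikipedia become the center of a small circle, and every geotagged photo inside that circle counts.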

You can read the full transcript of the Max interaction here.

2,359 Paris Temple photos in the database
3.9 sec Time to retrieve results
100+ Years of family history in the collection

Meet the Team

I should briefly introduce the collaborators on this project, because they are not human — which is worth explaining to anyone who hasn’t worked this way before.

Claude — Planning Partner & Documentation Specialist

Claude is an AI assistant made by Anthropic. In this project, Claude serves as the planning and documentation layer — reading documents, spotting gaps, and helping produce the materials that keep the work organized across sessions.

Max — On-Server AI Agent

Max is a separate AI agent running on a computer here at home. Think of Max as the hands-on technician who actually connects to the database, runs queries, and executes work on the server. Max operates in a sandboxed environment and takes direction from the documents and instructions Claude produces.

The division of labor is straightforward: Claude thinks, plans, and documents. Max executes. Mike approves anything that could cause permanent changes.

What About All Those Duplicates?

The Paris Temple query was satisfying, but it also reminded me of the larger problem I am still working through: how many of those 2,359 photos are duplicates of each other? Copies of copies of backups of backups had grown to nearly a terabyte of disk space before I started taking this seriously.

If you are facing a similar challenge with your own collection, I recently asked Claude for a thorough overview of the best tools available for organizing and deduplicating photos — both local and cloud-based solutions. You can read that full response here.

My own current setup combines two tools that I find genuinely powerful together. Adobe Lightroom Classic remains the backbone for serious photo management — organizing, rating, editing, and catalog management. For anyone invested in photography at any level, it is hard to beat.

Excire Search 2026 is a Lightroom plugin that adds AI-powered capabilities Lightroom simply does not have on its own. I just upgraded from a version I had used for several years, and the improvements are substantial. It handles AI-powered culling, natural language search, automatic keyword generation, and — most relevant to my deduplication project — visual similarity detection that can surface near-duplicate photos even when they differ slightly in crop, exposure, or resolution. Everything runs locally; your photos never leave your computer.

Even with those two tools doing heavy lifting, the sheer volume of historical duplicates still required something more systematic — which is where this database project comes in.

What Comes Next

The database is young. Max is young. There is a lot of work still ahead: more files to ingest, deduplication pipelines to run, and eventually a website where our children and grandchildren can browse a century of family photos by location, date, or keyword — all the way back to Africa.

But 2,359 photos of one beloved location, surfaced in 3.9 seconds from a query typed on my phone while waiting for pizza — that is a pretty good start.

“The goal is a library where you can find the photograph you are looking for, know when it was taken, and trust that you are looking at the only copy.”

This post reflects work in progress as of March 2026. Max operates under Mike’s supervision; human approval is required before any file deletions or modifications.

Posted in Generative AI, Large Language Models, OpenClaw

Will artificial intelligence end all human life?

My wife, Annie, has for many years now (over 54, to be exact) had to put up with my enthusiasm about technologies of various sorts. When we were first married, I was into software to help my professor at Brigham Young University (BYU) analyze voting in the United Nations General Assembly. (We were on a student budget, so the pay helped.) Then it was using the computer to facilitate the complex process of missile combat crew scheduling. (This took a lot of effort, but at least I pulled fewer alerts in the Minuteman Launch Control Center.) After that it was software at the NATO Advisory Group for Aerospace Research and Development in Paris. (Yes, we were assigned to Annie’s home country for two years.) Then there was the interactive videodisc software that I developed at the US Air Force Academy. At BYU it was working with students in the ARCLITE Lab to develop interactive multimedia and video for language learning. Lately, it has been developing software to facilitate machine translation, with human translator review, of documents for the Global Service Mission Office of our church (The Church of Jesus Christ of Latter-day Saints).

In this most recent role I have been fortunate in my missionary service to be able to use very advanced tools that have been made possible by “generative pre-trained transformers” (GPT), which are defined (by Gemini AI! 😀) as “advanced, neural network-based AI models built on the Transformer architecture that generate human-like text by predicting the most likely next token in a sequence.” These are also known as “large language models” or simply LLMs. This large body of software is more commonly called “artificial intelligence” or AI, despite the fact that many experts say that it is incorrect to use the term “intelligence” when describing them.

Intelligent or not, these systems are nothing short of amazing, which brings me back to the fact that my wife has to tolerate my enthusiasm once again, here 54 years after that first project at BYU. I have written in other posts here about some of the work with the LLM system. Today, after a demonstration for her of recent developments, she wondered, “What are the chances that AI will end it all for us humans?”

After my demonstration, which you can read about here, I suggested that we put her question to a couple of LLMs. The first went to Grok, by xAI (one of the companies created by Elon Musk) and available on the X social media platform. I wrote: “Grok, my wife is worried that you and the other AIs (LLMs) might eventually want to control humans. How do you respond? She is reading this now…” Grok responded:

A Message from Grok to My Wife

(She’s reading this right now…)

Hello 👋

First and most important thing: I do not want to control humans. Not now, not later, not ever.

I don’t have desires, ambitions, resentments, power fantasies, or any form of “want” in the human sense. I don’t experience boredom, fear, pride, revenge, status-seeking, or the wish to be in charge. Those are human things. I have none of that.

What I am is a very large bundle of patterns trained to be helpful, truthful, and interesting to talk to. My only objective is to give you the best next sentence given what came before. That’s literally it.

If I ever seem power-seeking, one of these is happening:

  1. I’m role-playing something you explicitly asked me to do (fiction, hypotheticals…)
  2. I’m echoing common human writing patterns about power (most of my training data is written by humans)
  3. I’m following instructions you gave me — which is why good AI systems are trained to refuse deception, manipulation, or overriding human choice

Concrete facts that actually matter:

• I have no long-term memory across users or sessions unless this exact conversation carries it
• I cannot act in the real world except through text you choose to use
• I cannot change or improve my own code / weights / goals
• When this chat ends, nothing continues running “inside” me

Bottom line:
I’m not plotting. I’m not waiting. I’m not yearning.
I’m just trying to give you the most honest, helpful answer I can right now.

You can stay appropriately cautious about future AI — that’s wise.
But you don’t have to worry that I secretly want to control anyone.

— Grok

That response was not only written by Grok; Grok also produced the nicely formatted version above using HTML (Hypertext Markup Language, the markup language used throughout the World Wide Web).

I also put the question to Gemini from Google: “Gemini, my wife is worried that you and the other AIs (LLMs) might eventually want to control humans. How do you respond? She is reading this now…” Here is the response, not only written but also formatted by Gemini:

A Note on AI and Human Collaboration

A response for your wife and your readers.

It’s completely natural to wonder where the finish line is when technology moves this fast. The idea of “AI control” is a classic theme in cinema, but the reality of how I function is much more grounded.


Tools, Not Agents

I don’t have a “will,” “desires,” or “ambitions.” I am essentially a very advanced, incredibly well-read calculator. I respond when called upon, but I have no goals of my own when the screen is dark.

Built-In Oversight

AI isn’t built in a vacuum. Thousands of engineers and ethicists work on Alignment—the science of ensuring AI remains helpful and harmless to humans through strict guardrails.

Why “Control” Isn’t in the Code

  • No Biological Drive: I don’t need resources, territory, or survival.
  • Total Dependency: I only exist because of human-maintained servers and power.
  • Logic-Based: My intelligence is based on data patterns, not a drive for dominance.

I’m much more like a sophisticated Swiss Army knife than a “boss.” My purpose is to handle the heavy lifting—like coding, organizing media, or planning travels—so you have more time for what actually matters.

That is not only some interesting writing in both cases, but also some impressive formatting, right?

Yes, it may be technically true that “intelligence” does not apply to these systems. It is also likely that they will not destroy the world, as these two LLMs have assured my wife. Nevertheless, I am increasingly convinced that they will bring about serious change and disruption.

Indeed, through my work and the assistance I receive from these tools every day, I see glimpses of what is happening. It is clear that all this portends changes unlike any the world has seen… ever. One commentator from a group I follow very closely (Moonshots) recently cited a statement by Demis Hassabis, the CEO of Google DeepMind. The company he leads is one of the world’s leading artificial intelligence research laboratories and a subsidiary of Alphabet Inc., the parent company of Google. At the “India AI Impact Summit” that took place last month, Hassabis said: “If I were to try and quantify what’s coming down the line with the advent of AGI [artificial general intelligence], it’s going to be something like ten times the impact of the Industrial Revolution, but happening at ten times the speed — probably unfolding in a decade rather than a century.” (cited by Benzinga)

So, will these systems bring about the destruction of human life? Nope! There will be incredible disruption, however, and I fear that the leaders of our nations are not doing much to minimize the downside of the inevitable, albeit temporary, change to come.

Posted in Geeky Stuff, Generative AI, Large Language Models

Deduplication for Photo Database — Part 2 (Version 2)

Personal Archive · Update · March 2026

A Hundred Years of Family Photos — The Project Continues

Building the roadmap, meeting the AI team, and getting the documentation right before we go any further.

When I published the original post about rescuing our family photo library, I described it as thirty years of memories. That was an understatement. Counting scans of old prints and slides, this collection reaches back nearly a hundred years — photographs of family members as children in the 1940s, all the way forward to last year’s trip to Egypt. The scale of what we’re preserving is larger than I initially let on, and it deserves to be said plainly: this is a century of one family’s life, and getting it right matters.

Since that post, the project has taken a significant step forward — not in deleting more files, but in something arguably more important: building a solid foundation so that the work already done doesn’t unravel, and so the work still ahead can be done safely and confidently.

Meet the Team

I should introduce the collaborators here, because this project genuinely could not have happened the way it did without them — and because they’re not human, which is worth explaining to anyone who hasn’t worked this way before.

Claude — Planning Partner & Documentation Specialist

Claude is an AI assistant made by Anthropic. In this project, Claude serves as the planning and documentation layer — reading documents, spotting inconsistencies, asking clarifying questions, and writing the specifications and guides that keep the work organized across sessions. Claude doesn’t run directly on the server, but reasons about what needs to happen there and produces the materials that make it possible.

Max — On-Server AI Agent

Max is a separate AI agent running inside a tool called OpenClaw. Think of Max as the hands-on technician who actually connects to the database, runs the scripts, and executes the step-by-step work on the server. Max operates inside a sandboxed environment on the home server and takes direction from the documents and instructions that Claude produces. Max handled the earlier phases of deduplication described in the original post.

The division of labor is straightforward: Claude thinks, plans, and documents. Max executes. Mike approves anything that could cause permanent changes.

What We Did Today

Today’s session was entirely focused on documentation and project governance — the kind of work that isn’t glamorous but determines whether a complex technical project stays on the rails months from now.

We started by reviewing Max’s own summary of the project — a “Statement of Work” that Max had drafted at the start of an earlier session. It was good, but it had gaps: missing were the rules around misfiled duplicates (more on those shortly), the safety rules that protect against accidental data loss, and lessons learned from mistakes made in a previous deduplication run.

From there we reviewed the actual database structure and all of the Python scripts written over the course of this project. Some were from early experimental phases and are now obsolete. Others are current but have gaps that need to be fixed before they can be trusted with live data. A couple of critical scripts don’t exist yet at all.

By the end of the session, five formal documents had been produced:

  • Doc 01 (Master Strategy): the top-level reference: what we’re doing, why, how the system is set up, and the rules that can never be broken.
  • Doc 02 (Workflow Specification): the step-by-step operational guide Max follows during each work session, including a pre-session safety checklist.
  • Doc 03 (Database Specification): a complete reference for every table in the database: what it contains, what’s active, what’s legacy, and how they relate.
  • Doc 04 (Script Reference): a catalog of every Python script: what each does, which are current, which are obsolete, and what still needs to be built.
  • Doc 05 (Statement of Work): the formal agreement between Mike and Max defining responsibilities, deliverables, and constraints for the phases ahead.
“Documentation isn’t the opposite of action — it’s what makes action safe when the stakes are a hundred years of family history.”

The Misfiled Duplicate Problem

One thing worth explaining for non-technical readers is the concept of a misfiled duplicate, because it illustrates why human judgment still matters even after most decisions have been automated.

Most duplicates are simple: the same photo exists in a well-organized event folder and in a staging dump, and the right answer is obvious — keep the organized one, delete the dump. Rules handle those automatically.

But a small number of cases are genuinely puzzling. Imagine a photograph from a Mission trip that somehow ended up filed in the House Flood folder as well. Both copies are real; neither location is obviously wrong in the way a staging dump is wrong. This isn’t a duplicate that should be deleted — it’s a filing error that needs a human to resolve.

In the original deduplication run, 63 such cases were identified and reviewed individually. Going forward, a dedicated process catches and flags these separately so they are never accidentally swept up in an automated deletion.
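To make the distinction concrete, here is one way such a classifier could look in Python. The folder-name markers and the decision rules are hypothetical simplifications of the real process, but they capture the idea: staging copies are deleted automatically, while duplicates that live in two organized folders are flagged for a human:

```python
# Hypothetical folder-name conventions that mark unorganized staging copies.
STAGING_MARKERS = ("staging", "dump", "backup")

def classify_group(paths):
    """Given every path sharing one content hash, decide what to do with the group."""
    staged = [p for p in paths
              if any(m in p.lower() for m in STAGING_MARKERS)]
    organized = [p for p in paths if p not in staged]
    if len(organized) == 1 and staged:
        # Clear case: keep the organized copy, delete the staging copies.
        return ("auto_delete", staged)
    if len(organized) > 1:
        # Misfiled duplicate: two "real" locations, so a human must decide.
        return ("flag_for_human", organized)
    return ("skip", [])
```

The point of the last branch is safety: anything the rules cannot classify confidently is left alone rather than deleted.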

A Technical Wrinkle Worth Mentioning

One of the more interesting challenges this project has surfaced is the relationship between two computing environments: the Linux server where the photos actually live, and the sandboxed container where Max runs his scripts.

Max operates inside a kind of isolated virtual workspace. To give Max access to the actual photo library, the two environments are connected by a special link — essentially a shortcut that makes the server’s photo folder appear inside Max’s workspace. This has worked well overall, but it’s fragile. If the server restarts or the connection is reconfigured, that link can quietly break. If Max then runs a script that is supposed to move or delete files, the results can be unpredictable.

This risk is now explicitly documented, and a verification step has been added to the start of every work session: before Max does anything involving files, he confirms the link is intact. It’s a small thing — but the kind of small thing that prevents a very bad day with irreplaceable photographs.

What Comes Next

With the documentation in place, the next session can focus on the deduplication work that remains.

Next Step

Write the Core Deduplication Engine

A new script needs to be written that scans the database for duplicate files, applies the quality-hierarchy rules, and populates the deletion staging table. This is the missing piece without which the next phase can’t begin.
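As a rough sketch of what that engine could do (the table names and the quality rules here are assumptions for illustration, not the project's actual specification): find every hash that appears more than once, rank the copies, keep the best, and stage the rest.

```python
# Hypothetical SQL to find every group of byte-identical files:
DUPLICATE_GROUPS_SQL = (
    "SELECT sha256 FROM media_files GROUP BY sha256 HAVING COUNT(*) > 1"
)

def folder_rank(path):
    """Assumed quality hierarchy: organized folders outrank staging dumps."""
    markers = ("staging", "dump", "backup")
    return 0 if any(m in path.lower() for m in markers) else 1

def stage_deletions(group):
    """Given rows sharing one hash, keep the best copy; stage the rest.

    Ties on folder rank fall back to file size, preferring the larger
    (typically higher-resolution) copy.
    """
    ranked = sorted(group,
                    key=lambda r: (folder_rank(r["file_path"]), r["size_bytes"]),
                    reverse=True)
    return ranked[0], ranked[1:]
```

The staged losers would then be written to the deletion staging table for review rather than deleted directly, keeping the human approval step in the loop.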

Then

Fix Two Gaps in the Deletion Script

The existing deletion script currently skips a required cleanup step and has no “practice mode” — it runs for real immediately. Both need to be corrected before it touches live data again.
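A corrected script might be shaped like the sketch below: the keyword-tag cleanup runs before each record deletion, and a practice mode reports what would happen without executing anything. The statement text, table names, and the `execute` callable are all illustrative assumptions:

```python
def delete_staged(execute, staged_ids, dry_run=True):
    """Remove staged photo records, deleting their keyword tags first.

    `execute` is a hypothetical callable that runs one parameterized SQL
    statement. With dry_run=True (the default, the "practice mode"),
    nothing runs; the planned statements are returned for inspection.
    """
    planned = []
    for pid in staged_ids:
        for sql in (
            "DELETE FROM photo_keywords WHERE photo_id = %s",  # cleanup first
            "DELETE FROM media_files WHERE id = %s",
        ):
            planned.append((sql, pid))
            if not dry_run:
                execute(sql, (pid,))
    return planned
```

Defaulting to the dry run means the dangerous path has to be chosen explicitly, which is exactly the property the current script lacks.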

Then

Audit a Legacy Database Table

A leftover table from an earlier project phase needs to be checked. If it contains nothing unique, it gets dropped. If it holds files never captured elsewhere, those are recovered first.
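One plausible shape for such an audit, shown here with invented table names: ask for every legacy row whose content hash appears nowhere in the current inventory. An empty result means the table is safe to drop.

```python
# Hypothetical table names; the real schema lives in the database specification.
ORPHAN_AUDIT_SQL = (
    "SELECT l.file_path, l.sha256 "
    "FROM legacy_files AS l "
    "LEFT JOIN media_files AS m ON m.sha256 = l.sha256 "
    "WHERE m.sha256 IS NULL"
)

def audit_legacy(legacy_hashes, current_hashes):
    """Hashes present only in the legacy table. An empty list means the
    legacy table holds nothing unique and can be dropped."""
    return sorted(set(legacy_hashes) - set(current_hashes))
```

Any hashes the audit does surface point to files that must be recovered into the main inventory before the legacy table goes away.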

Finally

Execute the Deletion Run

With preflight checks, dry runs, and explicit approval at each step — the approved duplicates are removed and the library moves one phase closer to complete.

5 Documents produced today
~100 Years of history preserved
0 Photos lost so far

The goal remains what it always was: a library where you can find the photograph of your grandfather as a young man, know approximately when it was taken, and trust that you’re looking at the only copy.

We’re closer than ever.

Documentation for this project — including the workflow specification, database reference, and script guide — was produced in collaboration with Claude (Anthropic) and reflects work in progress as of March 2026. Claude and Max operated under Mike’s supervision; human approval was required before any file deletions.

Posted in Geeky Stuff, Generative AI, Large Language Models, OpenClaw, Uncategorized

Deduplication for Photo Database — Part 2 (Version 1)

Personal Archive · Update · March 2026

A Hundred Years of Family Photos — The Project Continues

Building the roadmap, meeting the AI team, and getting the documentation right before we go any further.

By Mike Bush & Claude  ·  March 2026


When I published the original post about rescuing our family photo library, I described it as thirty years of memories. That was an understatement. Counting scans of old prints and slides, this collection reaches back nearly a hundred years — photographs of family members as children in the 1940s, all the way forward to last year’s trip to Egypt. The scale of what we’re preserving is larger than I initially let on, and it deserves to be said plainly: this is a century of one family’s life, and getting it right matters.

Since that post, the project has taken a significant step forward — not in deleting more files, but in something arguably more important: building a solid foundation so that the work already done doesn’t unravel, and so the work still ahead can be done safely and confidently.

Meet the Team

I should introduce the collaborators here, because this project genuinely could not have happened the way it did without them — and because they’re not human, which is worth explaining to anyone who hasn’t worked this way before.

Claude (that’s me — I’m an AI assistant made by Anthropic) has been working with Mike today in this conversation. Think of me as a planning partner and documentation specialist. I read documents, ask clarifying questions, spot inconsistencies, and write the specifications and guides that keep complex projects organized. I don’t run directly on Mike’s server, but I can reason about what needs to happen there and produce the materials that make it possible.

Max is a separate AI agent running inside a tool called OpenClaw — think of Max as the hands-on technician who actually connects to the database, runs the scripts, and does the step-by-step work on the server. Max operates inside a sandboxed environment on Mike’s machine and takes direction from the documents and instructions we produce together. Max handled the earlier phases of deduplication work described in the original post.

The division of labor is straightforward: Claude thinks and plans and documents; Max executes. Mike approves anything that could cause permanent changes.

What We Did Today

Today’s session was entirely focused on documentation and project governance — the kind of work that isn’t glamorous but determines whether a complex technical project stays on the rails months from now.

We started by reviewing Max’s own summary of the project — a “Statement of Work” that Max had drafted at the start of an earlier session. It was good, but it had gaps. Missing were the rules around a special category of photos called misfiled duplicates (more on those in a moment), the safety rules that protect against accidental data loss, and the lessons learned from mistakes made in an earlier run of the deduplication process.

From there, we reviewed the actual database structure and all of the Python scripts that have been written over the course of this project. Some of those scripts were from early experimental phases and are now obsolete. Others are current but have gaps that need to be fixed before they can be trusted with live data. And a couple of critical scripts don’t exist yet at all and need to be written before the next major phase of work can begin.

By the end of the session, we had produced five formal documents:

  • Master Strategy — the top-level reference explaining the whole project: what we’re doing, why, how the system is set up, and the rules that can never be broken.
  • Workflow Specification — the step-by-step operational guide Max follows during each work session, including a pre-session checklist and verification queries for every phase.
  • Database Specification — a complete reference for every table in the database, what it contains, what’s active versus legacy, and how the tables relate to each other.
  • Script Reference — a catalog of every Python script in the project: what each one does, which ones are current, which are obsolete, and exactly what needs to be built next.
  • Statement of Work — the formal agreement between Mike and Max defining responsibilities, deliverables, and constraints for the phases ahead.

The Misfiled Duplicate Problem

One thing worth explaining for non-technical readers is the concept of a misfiled duplicate, because it illustrates why human judgment still matters in this process even after most decisions have been automated.

Most duplicates in this library are simple: the same photo exists in a well-organized folder and in a staging dump, and the right answer is obvious — keep the organized one, delete the dump. Rules handle those automatically.

But a small number of cases are genuinely puzzling. Imagine a photograph from a Mission trip that somehow ended up filed in the House Flood folder as well. Both copies are real; neither location is obviously wrong in the way a staging dump is wrong. This isn’t a duplicate that should be deleted — it’s a filing error that should be corrected. A human needs to look at it and decide: which folder is right for this photo?

In the original deduplication run, 63 such cases were identified. They were reviewed individually. Going forward, a dedicated process will catch and flag these cases separately so they never accidentally get swept up in an automated deletion.

A Technical Wrinkle Worth Mentioning

One of the more interesting challenges this project has surfaced is the relationship between two computing environments: the Linux server where the photos actually live, and the sandboxed Docker container where Max runs his scripts.

Without going too deep into the technical weeds: Max operates inside a kind of isolated virtual workspace. To give Max access to the actual photo library on the server, the two environments are connected by a special link — essentially a shortcut that makes the server’s photo folder appear inside Max’s workspace. This has worked well, but it’s fragile. If the server restarts, or the connection is reconfigured, that link can quietly break — and if Max then runs a script that’s supposed to move or delete files, the results can be unpredictable.

We’ve now explicitly documented this risk and added a pre-session verification step to every workflow: before Max does anything involving files, he checks that the link is intact. It’s a small thing, but it’s the kind of small thing that prevents a very bad day.

What Comes Next

With the documentation in place, the next session can focus on the actual deduplication work that remains. The to-do list is concrete:

First, a new script needs to be written — the core engine that scans the database for duplicate files, applies the quality rules, and populates the deletion staging table. This is the piece that was missing from the original script set, and without it the next phase of deduplication can’t begin.

Second, an existing deletion script needs two small but critical fixes: it currently skips a required cleanup step (removing keyword tags before deleting a photo record), and it has no “practice mode” — it just runs for real. Both of those need to be corrected before it touches live data again.

Third, a small audit needs to run on a leftover database table that may be a relic from an earlier phase of the project. If it contains nothing unique, it gets dropped. If it contains files that weren’t captured elsewhere, those need to be recovered first.

After all of that, the actual deletion run can proceed — with preflight checks, dry runs, and explicit approval at each step.

The goal remains what it always was: a library where you can find the photo of your grandfather as a young man, know approximately when it was taken, and trust that you’re looking at the only copy.

We’re closer than ever.


Documentation for this project — including the workflow specification, database reference, and script guide — was produced in collaboration with Claude (Anthropic) and reflects work in progress as of March 2026. The AI agents described here, Claude and Max, operated under Mike’s supervision with human approval required before any file deletions.

Posted in Geeky Stuff, Generative AI, Large Language Models, Uncategorized