Deduplication for Photo Database — Part 2 (Version 2)

Personal Archive · Update · March 2026

A Hundred Years of Family Photos — The Project Continues

Building the roadmap, meeting the AI team, and getting the documentation right before we go any further.

When I published the original post about rescuing our family photo library, I described it as thirty years of memories. That was an understatement. Counting scans of old prints and slides, this collection reaches back nearly a hundred years — photographs of family members as children in the 1940s, all the way forward to last year’s trip to Egypt. The scale of what we’re preserving is larger than I initially let on, and it deserves to be said plainly: this is a century of one family’s life, and getting it right matters.

Since that post, the project has taken a significant step forward — not in deleting more files, but in something arguably more important: building a solid foundation so that the work already done doesn’t unravel, and so the work still ahead can be done safely and confidently.

Meet the Team

I should introduce the collaborators here, because this project genuinely could not have happened the way it did without them — and because they’re not human, which is worth explaining to anyone who hasn’t worked this way before.

Claude — Planning Partner & Documentation Specialist

Claude is an AI assistant made by Anthropic. In this project, Claude serves as the planning and documentation layer — reading documents, spotting inconsistencies, asking clarifying questions, and writing the specifications and guides that keep the work organized across sessions. Claude doesn’t run directly on the server, but reasons about what needs to happen there and produces the materials that make it possible.

Max — On-Server AI Agent

Max is a separate AI agent running inside a tool called OpenClaw. Think of Max as the hands-on technician who actually connects to the database, runs the scripts, and executes the step-by-step work on the server. Max operates inside a sandboxed environment on the home server and takes direction from the documents and instructions that Claude produces. Max handled the earlier phases of deduplication described in the original post.

The division of labor is straightforward: Claude thinks, plans, and documents. Max executes. Mike approves anything that could cause permanent changes.

What We Did Today

Today’s session was entirely focused on documentation and project governance — the kind of work that isn’t glamorous but determines whether a complex technical project stays on the rails months from now.

We started by reviewing Max’s own summary of the project — a “Statement of Work” that Max had drafted at the start of an earlier session. It was good, but it had gaps: missing were the rules around misfiled duplicates (more on those shortly), the safety rules that protect against accidental data loss, and lessons learned from mistakes made in a previous deduplication run.

From there we reviewed the actual database structure and all of the Python scripts written over the course of this project. Some were from early experimental phases and are now obsolete. Others are current but have gaps that need to be fixed before they can be trusted with live data. A couple of critical scripts don’t exist yet at all.

By the end of the session, five formal documents had been produced:

  • Doc 01, Master Strategy: the top-level reference covering what we’re doing, why, how the system is set up, and the rules that can never be broken.
  • Doc 02, Workflow Specification: the step-by-step operational guide Max follows during each work session, including a pre-session safety checklist.
  • Doc 03, Database Specification: a complete reference for every table in the database — what it contains, what’s active, what’s legacy, and how the tables relate.
  • Doc 04, Script Reference: a catalog of every Python script — what each does, which are current, which are obsolete, and what still needs to be built.
  • Doc 05, Statement of Work: the formal agreement between Mike and Max defining responsibilities, deliverables, and constraints for the phases ahead.
“Documentation isn’t the opposite of action — it’s what makes action safe when the stakes are a hundred years of family history.”

The Misfiled Duplicate Problem

One thing worth explaining for non-technical readers is the concept of a misfiled duplicate, because it illustrates why human judgment still matters even after most decisions have been automated.

Most duplicates are simple: the same photo exists in a well-organized event folder and in a staging dump, and the right answer is obvious — keep the organized one, delete the dump. Rules handle those automatically.

But a small number of cases are genuinely puzzling. Imagine a photograph from a Mission trip that somehow ended up filed in the House Flood folder as well. Both copies are real; neither location is obviously wrong in the way a staging dump is wrong. This isn’t a duplicate that should be deleted — it’s a filing error that needs a human to resolve.

In the original deduplication run, 63 such cases were identified and reviewed individually. Going forward, a dedicated process catches and flags these separately so they are never accidentally swept up in an automated deletion.

A Technical Wrinkle Worth Mentioning

One of the more interesting challenges this project has surfaced is the relationship between two computing environments: the Linux server where the photos actually live, and the sandboxed container where Max runs his scripts.

Max operates inside a kind of isolated virtual workspace. To give Max access to the actual photo library, the two environments are connected by a special link — essentially a shortcut that makes the server’s photo folder appear inside Max’s workspace. This has worked well overall, but it’s fragile. If the server restarts or the connection is reconfigured, that link can quietly break. If Max then runs a script that’s supposed to move or delete files, the results can be unpredictable.

This risk is now explicitly documented, and a verification step has been added to the start of every work session: before Max does anything involving files, he confirms the link is intact. It’s a small thing — but the kind of small thing that prevents a very bad day with irreplaceable photographs.
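That verification step can be as simple as a few lines of Python. The sketch below uses a hypothetical link path and invented checks; the real mount point and procedure belong to the workflow specification.

```python
# Hypothetical pre-session check: confirm the photo-library link is intact
# before any script is allowed to touch files. The path is illustrative.
import os

def library_link_ok(path: str) -> bool:
    """True only if the path exists, resolves to a directory, and is non-empty."""
    if not os.path.exists(path):      # a broken symlink makes exists() False
        return False
    real = os.path.realpath(path)
    if not os.path.isdir(real):
        return False
    return len(os.listdir(real)) > 0  # a 116k-file library never looks empty

# In a real session the workflow would abort immediately if
# library_link_ok("/workspace/photos") returned False.
```

The point is less the specific checks than that they run automatically, before anything destructive, every single session.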

What Comes Next

With the documentation in place, the next session can focus on the deduplication work that remains.

Next Step: Write the Core Deduplication Engine

A new script needs to be written that scans the database for duplicate files, applies the quality-hierarchy rules, and populates the deletion staging table. This is the missing piece without which the next phase can’t begin.
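A rough sketch of what that engine might look like follows. The `files` and `staging_deletions` table names and the two-tier quality scoring are invented for illustration; the real schema is defined in Doc 03.

```python
# Sketch of the missing dedup engine: group files by content hash, keep the
# copy in the best-quality folder, stage the rest. Schema names are invented.
import sqlite3

def folder_quality(path: str) -> int:
    """Lower is better. Staging dumps (e.g. '00 iPhone Dumps') rank worst."""
    return 8 if "/00 " in path else 1

def stage_duplicates(db: sqlite3.Connection) -> int:
    """For each hash with more than one copy, stage all but the best copy
    for deletion. Returns the number of rows staged."""
    staged = 0
    hashes = db.execute(
        "SELECT hash FROM files GROUP BY hash HAVING COUNT(*) > 1"
    ).fetchall()
    for (h,) in hashes:
        copies = db.execute(
            "SELECT id, path FROM files WHERE hash = ?", (h,)
        ).fetchall()
        copies.sort(key=lambda row: folder_quality(row[1]))  # keeper first
        for file_id, _path in copies[1:]:                    # stage the rest
            db.execute(
                "INSERT INTO staging_deletions (file_id) VALUES (?)",
                (file_id,),
            )
            staged += 1
    db.commit()
    return staged
```

Note that the engine only populates the staging table; nothing is deleted until the separate, human-approved deletion run.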

Then: Fix Two Gaps in the Deletion Script

The existing deletion script currently skips a required cleanup step (removing keyword tags before deleting a photo record) and has no “practice mode” — it runs for real immediately. Both need to be corrected before it touches live data again.
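The “practice mode” fix usually amounts to routing every destructive call through one gate that honors a dry-run flag, defaulting to the safe side. A minimal sketch with an illustrative function name:

```python
# Sketch of a dry-run gate: one function owns the destructive call, and
# "practice mode" is the default so the dangerous path must be chosen
# explicitly. Function name and messages are illustrative.
import os

def delete_file(path: str, *, dry_run: bool = True) -> str:
    """Delete a file for real, or merely report what would happen."""
    if dry_run:
        return f"DRY RUN: would delete {path}"
    os.remove(path)
    return f"deleted {path}"
```

Making `dry_run=True` the default means a forgotten flag produces a harmless report instead of a missing photograph.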

Then: Audit a Legacy Database Table

A leftover table from an earlier project phase needs to be checked. If it contains nothing unique, it gets dropped. If it holds files never captured elsewhere, those are recovered first.
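In database terms, that audit boils down to one anti-join: anything in the legacy table whose content hash appears nowhere in the main table is unique and must be recovered. A minimal sketch, with invented table names (`legacy_files`, `files`):

```python
# Sketch of the legacy-table audit: list paths in the leftover table whose
# content hash is unknown to the main files table. Table names are invented.
import sqlite3

def orphaned_rows(db: sqlite3.Connection) -> list:
    """Paths in legacy_files whose hash appears nowhere in files."""
    return [row[0] for row in db.execute(
        "SELECT path FROM legacy_files "
        "WHERE hash NOT IN (SELECT hash FROM files)"
    )]
```

If the function returns an empty list, the legacy table can be dropped; otherwise the listed files get recovered first.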

Finally: Execute the Deletion Run

With preflight checks, dry runs, and explicit approval at each step, the approved duplicates are removed and the library moves one phase closer to completion.

  • 5 documents produced today
  • ~100 years of history preserved
  • 0 photos lost so far

The goal remains what it always was: a library where you can find the photograph of your grandfather as a young man, know approximately when it was taken, and trust that you’re looking at the only copy.

We’re closer than ever.

Documentation for this project — including the workflow specification, database reference, and script guide — was produced in collaboration with Claude (Anthropic) and reflects work in progress as of March 2026. Claude and Max operated under Mike’s supervision; human approval was required before any file deletions.

Posted in Geeky Stuff, Generative AI, Large Language Models, OpenClaw, Uncategorized


Deduplication for Photo Database — Part 1

Personal Archive · March 2026

Rescuing Thirty Years of Family Memories

How a chaotic digital photo library of over 116,000 images was sorted, deduplicated, and organized — without losing a single irreplaceable moment.

Somewhere on a hard drive, there are photographs of me as a child in the 1940s. There are photos and videos from a trip to Egypt in 2023, snapshots from Normandy beaches, a grandchild’s first steps, and decades of Christmases. Across more than seventy years of photography, digital captures and scanned prints alike, the collection had grown to more than 116,000 files — and it was a mess.

The same photo might exist in three different folders. A picture taken at the Paris Temple in 2015 could be filed under “2015 France,” “2016 France,” and a staging folder called “00 iPhone Dumps” — all at once. Nobody had done anything wrong. This is simply what happens when photos accumulate across phones, cameras, computers, and backup drives over three decades without a consistent system.

This is the story of how we fixed it.

The Scale of the Problem

Before any work began, a complete audit of the library revealed the true scope of the disorder. The collection contained 116,468 individual photo and video files stored across a Linux server. Many files appeared multiple times — not because anyone intended to keep duplicates, but because of how photos naturally accumulate: syncing a phone creates one copy, backing up a laptop creates another, and importing into a new photo app creates a third.

  • 116,468 total files in the library
  • 27,101 duplicate files found
  • 23% of the library was duplicates

Nearly one in four files was a duplicate. That’s roughly 27,000 photographs and videos taking up space, cluttering searches, and making it harder to find the photos that matter.

“The same photo of a monastery in Crete existed in four different folders simultaneously — none of them labeled with the location.”

Beyond duplicates, the folder organization itself was inconsistent. Some years had their photos neatly organized — 2018 had a folder for France, a folder from our missionary assignment to the Visitors’ Center of the Paris Temple, a folder for San Francisco. Other years had events scattered at the root level with no parent folder. Staging folders with names like “00 iPhone Dumps” and “00 Camera Roll” had accumulated thousands of photos that were never properly filed. One folder, left over from a 2021 attempt to use Adobe Lightroom’s cloud sync, contained photos that had been quietly duplicated across the entire library.

How We Approached It

Rather than manually reviewing 27,000 files — a task that would take weeks — I worked with Claude from Anthropic to build a system to do most of the work automatically, with human judgment applied only where it genuinely mattered.

Step One: Finding Every Duplicate

Each file was given a unique digital fingerprint based on its contents. Two files with identical fingerprints are guaranteed to be identical, regardless of filename or folder. This identified every true duplicate in the library with complete certainty.
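The post doesn’t name the specific algorithm, but such content fingerprints are typically cryptographic hashes of the file’s bytes. A SHA-256 sketch:

```python
# Content fingerprint: hash the raw bytes, so identical files get identical
# fingerprints no matter what they are named or where they live.
# SHA-256 is a common choice; the post does not say which hash was used.
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 hex digest of the file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
            h.update(chunk)
    return h.hexdigest()
```

Grouping 116,468 files by this digest is what surfaces every true duplicate in one pass.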

Step Two: Building the Rules

Not all duplicates are equal. A photo in a well-organized event folder (“2018 France/Normandy”) is more valuable than the same photo in a staging dump (“00 iPhone Dumps”). We built a hierarchy of folder quality with eight levels — from “nested named event” at the top to “abandoned sync folder” at the bottom — and wrote rules to automatically approve deletion of the lower-quality copy in clear-cut cases. These rules alone resolved nearly 25,000 of the 27,101 duplicate pairs.
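A toy version of that comparison might look like this. Only the top tier (“nested named event”) and the staging-dump bottom tier come from the post; the middle tiers and the path patterns are invented stand-ins:

```python
# Illustrative folder-quality ranking. Real project uses eight levels; only
# the best ("nested named event") and worst (staging/sync dumps) are named
# in the post, so the middle ranks here are assumptions.
import re

def quality_rank(folder: str) -> int:
    """Lower rank = better home for a photo (1 best, 8 worst)."""
    if folder.startswith("00 ") or "Dumps" in folder or "Camera Roll" in folder:
        return 8                           # staging/sync dump
    parts = folder.split("/")
    if len(parts) >= 2 and re.match(r"\d{4}", parts[0]):
        return 1                           # nested named event, e.g. "2018 France/Normandy"
    if re.match(r"\d{4}", parts[0]):
        return 3                           # bare year folder (assumed tier)
    return 5                               # anything else (assumed tier)

def keep_or_delete(a: str, b: str):
    """Given two folders holding the same photo, return (keep, delete)."""
    return (a, b) if quality_rank(a) <= quality_rank(b) else (b, a)
```

When the two ranks differ clearly, the rule fires automatically; ties and near-ties are what fall through to the AI and human review steps below.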

Step Three: AI Review for Ambiguous Cases

After the automatic rules processed the obvious cases, roughly 3,000 duplicate pairs remained where the right answer wasn’t clear from folder names alone. These were sent to an AI assistant (Google’s Gemini), which examined each pair and decided which copy to keep, explaining its reasoning. The entire AI review cost 24 cents in computing time.

Step Four: Human Review of Edge Cases

A small number of cases — 63 out of 27,101 — required a human eye. These were photos that had ended up in genuinely unrelated folders: a photo from a Mission trip that somehow appeared in the House Flood folder, or a 1940s family photo filed in both the 1940s and 1950s folders. These were reviewed individually, with full file paths provided for side-by-side comparison.

What GPS Data Revealed

Modern smartphones embed precise GPS coordinates in every photo they take. This turned out to be a powerful verification tool. When the AI flagged uncertainty about whether a photo belonged in “2023 France & Egypt” or “2023 Egypt,” we could simply check: where was the camera when this photo was taken?

Running a geographic check on over 250 disputed photos confirmed that every single one was taken within Egypt’s borders — not France. The AI’s decisions were correct in every case.
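Conceptually, the check converts each photo’s EXIF coordinates, which are stored as degrees/minutes/seconds plus a hemisphere reference, into decimal degrees and tests them against a bounding box. The Egypt box below is my rough approximation, not a figure from the project:

```python
# Sketch of the GPS sanity check: convert EXIF-style DMS coordinates to
# decimal degrees and test against a bounding box. The Egypt bounds are an
# approximation for illustration, not values taken from the project.
def dms_to_decimal(deg, minutes, seconds, ref):
    """EXIF GPS stores degrees/minutes/seconds plus a N/S/E/W reference."""
    value = deg + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

EGYPT_BOX = {"lat": (22.0, 31.7), "lon": (24.7, 36.9)}  # approximate

def inside_egypt(lat, lon):
    return (EGYPT_BOX["lat"][0] <= lat <= EGYPT_BOX["lat"][1]
            and EGYPT_BOX["lon"][0] <= lon <= EGYPT_BOX["lon"][1])
```

A photo taken in Cairo lands inside the box; a photo taken in Paris does not, which is exactly the distinction the disputed folders needed.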

This GPS verification process opens up exciting possibilities for the future. That 2014 trip to Crete? The GPS data already reveals clusters of photos taken at Preveli Monastery, near Rethymno, and near Heraklion — places that were visited but never named in the folder structure. A future phase of this project will use geographic clustering to automatically suggest subfolder names based on where the photos were actually taken, discovering the named places within a trip that were never manually labeled.

The Outcome

The deduplication review is now complete. Of the 27,101 duplicate files identified, 26,735 have been approved for deletion — an approval rate of 98.6%. The process took one evening of work, most of it automated.

  • 98.6% auto-resolved by rules + AI
  • <$0.50 total AI processing cost
  • 1 evening of work

The next phase will consolidate the folder structure itself — moving root-level event folders under their proper year parents, so that every photo from 2017 lives somewhere inside a “2017” folder rather than scattered at the root level. After that, Phase 3 will use GPS clustering to automatically suggest sub-locations within trip folders.

The goal is simple: a library where you can find the photo of your father as a child, know approximately when it was taken, and trust that you’re looking at the only copy.

What This Means for the Future

Everything built for this library — the duplicate detection, the folder quality rules, the AI review pipeline, the GPS verification — is reusable. The same system will run against a second photo library on a separate drive, and eventually against the full combined archive. The hard work of building and debugging the pipeline is done. Applying it to new collections is now a matter of days, not months.

Thirty years of family history, finally in order.

Posted in Geeky Stuff, Generative AI, Large Language Models, Uncategorized

LLM Uses Metaphors to Explain Problem

I am developing software using VB.net and the OpenXML library to substitute translations and screen captures into a master Microsoft Word document in English. The purpose is to produce new, target-language versions of the original Word documents. Text has its challenges (multiple runs of text for a single string of formatted text, to name just one). Graphics, on the other hand, move things to a whole new level of complexity. Replacing an image requires the software to understand the layout information of the original image before inserting the new one, and this gets very tricky.

I had been working with Gemini 2.5 Pro in developing the software, but the images were being sized incorrectly upon insertion into the new document. After one version that distorted the images upon insertion, the next version had images extending beyond the edge of the page like this:

That portion of the page should in fact look more like this:

Even that version, however, is not ideal. The screen capture is from the software being explained and does not occupy such a significant portion of the screen, which wastes space in the final document.

So, after that version, I returned to Gemini 2.5 Pro with this prompt:

That works better, but some of the graphics were captured at such a high resolution that they fill the whole page. Give me no new code, but give me some ideas to think about. I am thinking that I might also need to furnish you with the actual XML of the file so you can see what we might use. In the original version, the images were sized for a better page layout, so there must be some way to get that info.

Beyond wording in the response that was sycophantic at times, I found the use of metaphors rather intriguing:

  • “The ‘scale to page width’ logic is a sledgehammer. It’s a great fallback, but what we really need is the scalpel—the exact dimensions the original author intended.”
  • “Golden Ticket”
  • “we were essentially replacing the engine of a car but leaving the original chassis, wheels, and body. The new engine was too big for the chassis.”
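For the curious, the “exact dimensions the original author intended” live in the document’s drawing XML: each inline image carries a wp:extent element whose cx and cy attributes record its size in EMUs (914,400 per inch). Here is a Python sketch of reading them; the project itself is VB.net, and this parses a raw XML string rather than a full .docx:

```python
# Sketch: recover the author's intended image sizes from Word drawing XML.
# Each <wp:extent> carries cx/cy in EMUs (914,400 EMU per inch); reusing
# those values when swapping an image preserves the original page layout.
import xml.etree.ElementTree as ET

WP = "http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing"
EMU_PER_INCH = 914400

def original_extents(document_xml: str):
    """Return (width_inches, height_inches) for each drawing in the XML."""
    root = ET.fromstring(document_xml)
    return [
        (int(e.get("cx")) / EMU_PER_INCH, int(e.get("cy")) / EMU_PER_INCH)
        for e in root.iter(f"{{{WP}}}extent")
    ]
```

In other words, the “Golden Ticket” is simply carrying the old extent values over to the new image instead of computing a size from the replacement’s resolution.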

Here is the excerpt from the exchange that contains the use of metaphors:

Wanting to reflect more on what I was seeing there, I asked Anthropic’s Claude:

Is the use of metaphors more than clever syntax?

The full response was quite interesting and seemed to confirm what I was thinking: LLMs often appear to be doing more than simply placing the next word in a sentence according to statistical probabilities! Here is the final paragraph from Claude’s response:

So while metaphors can certainly serve as elegant rhetorical devices, their primary significance lies in their role as cognitive tools that structure thought, enable conceptual understanding, and mediate between abstract and concrete domains of experience.

Now, on to the next version of my code to replace images!

Note: I use Simtheory.ai to access all the primary LLM engines, which I highly recommend: For one reasonable fee, the subscriber has access to many models.

Posted in Uncategorized