Last week, we covered the basic RAG setup and had some fun answering questions from War and Peace. Today, we move on to the two-novel cases:
- Answer questions using passages from two novels and compare them
- Compare passages from two novels based on an unseen question
- Compare passages from two novels based on a similarity search
When I first challenged ChatGPT to “compare common tropes and plot devices between War and Peace and Middlemarch,” I had a specific trope in mind. Both novels are set in the early nineteenth century, a time when virtually all wealth was inherited – and the non-wealthy had little hope of ever earning any. So, everyone is trying either to marry into a rich family or suck up to a wealthy relative.
This trope is common enough that you could have seen it anywhere: our hero stands to inherit a fortune, but there is some intrigue around the dead uncle’s will. Maybe there’s a different version known only to the servants, etc. I put this question to RAG by gathering matching passages from both novels. The typical prompt template would be something like:
“Please answer this question {question} based on these passages from the novel War and Peace {context1} and these passages from the novel Middlemarch {context2}”
What we are looking for, though, is something a little more autonomous, so I simply removed the query text from the prompt template:
“Please compare these two novels with reference to the context provided … passages from the novel War and Peace {context1} and passages from the novel Middlemarch {context2}”
So, the retriever knows what question is to be answered, but ChatGPT is only asked to “compare” and draw its own conclusions. The results are quite good. I won’t share the whole response, but here is the concluding paragraph:
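As a sketch, the two templates above might be assembled like this (the function and variable names are mine, not from the original script):

```python
def guided_prompt(question: str, context1: str, context2: str) -> str:
    """Guided template: the question text appears in the prompt itself."""
    return (
        f"Please answer this question: {question} "
        f"based on these passages from the novel War and Peace: {context1} "
        f"and these passages from the novel Middlemarch: {context2}"
    )


def unguided_prompt(context1: str, context2: str) -> str:
    """Unguided template: the question steers retrieval only;
    the model is merely asked to compare the passages."""
    return (
        "Please compare these two novels with reference to the context provided "
        f"... passages from the novel War and Peace: {context1} "
        f"and passages from the novel Middlemarch: {context2}"
    )
```

The only difference is that the question string never reaches the model in the unguided version; it is used solely by the retriever to select the two contexts.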
Overall, while both novels touch on similar themes of inheritance and wills, they do so in distinct ways that reflect the respective societies and characters depicted in each work. Middlemarch delves into the personal and familial implications of inheritance, with a focus on individual motivations and moral dilemmas, while War and Peace explores the broader societal and political consequences of inheritance, with a tone that is more ironic and comedic.
Here we see ChatGPT contrasting the tone of the two samples, and drawing a thematic inference. You can even fish for likely tropes, like “Are there instances of women being unfaithful?”
RAG with Unguided Retrieval
Finally, to make RAG fully autonomous, we must dispense with the guiding hand of the query text, and set it loose using only cosine similarity. This script simply trundles through both databases, using Chroma’s query by vector to find similar passages. There is no need to create any new embeddings.
Once a cross-novel match is found, the script retrieves n_results similar passages on each side, and then passes the unguided “compare” prompt to OpenAI.
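The actual script leans on Chroma’s query-by-embedding, but the matching logic it implements amounts to an all-pairs cosine-similarity search. Here is a minimal NumPy sketch of that step, with the function name and array shapes my own invention:

```python
import numpy as np


def best_cross_novel_match(emb_a: np.ndarray, emb_b: np.ndarray):
    """Find the most similar pair of chunks across two novels.

    emb_a: (n_a, d) array of chunk embeddings for novel A.
    emb_b: (n_b, d) array of chunk embeddings for novel B.
    Returns (index_in_a, index_in_b, cosine_similarity).
    """
    # Normalize rows so the dot product equals cosine similarity.
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = a @ b.T  # (n_a, n_b): one similarity per cross-novel pair
    i, j = np.unravel_index(np.argmax(sims), sims.shape)
    return int(i), int(j), float(sims[i, j])
```

In practice each winning index would then seed a `collection.query(query_embeddings=[...], n_results=n)` call against its own collection to gather neighboring passages, which is why no new embeddings ever need to be created.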
Between War and Peace and Middlemarch, it settles on some grim material about “empathy and compassion in a time of hardship.” I didn’t feel like quoting it, so instead I tried another novel.
It took me about ten minutes to download, parse, and add Vanity Fair to the mix. In common with War and Peace, it has its own Napoleonic war (different campaign), and it offers a great opportunity for guided search: “Is one or more of the protagonists killed in action?”
The unguided search script, predictably, finds the war. Tolstoy treats the war from a historical perspective while, for Thackeray, it’s just the backdrop for his drama.
On the other hand, War and Peace takes a broader perspective, examining the larger geopolitical forces at play during the Napoleonic Wars. The novel delves into the complexities of international relations, diplomacy, and military strategy, showing how the actions of monarchs, diplomats, and military leaders shape the course of history. While War and Peace also portrays the impact of war on individuals, it does so within the context of larger historical forces and political developments.
The histogram above shows the distribution of cosine similarity results across all 4.8 million pairs of chunks between Middlemarch and War and Peace for ada-002 and 3-small. These are both 1,536-dimensional embeddings. I also experimented with 3-large.
Initially, I preferred ada-002 because it was easier for the script to find similar passages. After working for a while with both, and seeing the histogram, I think maybe a wider variance is better. It means that nearby passages really are similar, while those that aren’t are farther apart.
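That histogram is cheap to reproduce once both sets of embeddings are in memory as NumPy arrays (a sketch; the step of pulling embeddings out of Chroma is omitted, and the function name is mine):

```python
import numpy as np


def similarity_histogram(emb_a: np.ndarray, emb_b: np.ndarray, bins: int = 50):
    """Distribution of cosine similarities over all cross-novel chunk pairs."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = (a @ b.T).ravel()  # one value per cross-novel pair
    counts, edges = np.histogram(sims, bins=bins, range=(-1.0, 1.0))
    return counts, edges
```

A few thousand chunks per novel gives a similarity matrix on the order of millions of entries, which still fits comfortably in memory. The “wider variance” observation can be checked in one line with `np.std(sims)` for each embedding model.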
For instance, 3-small gives a better answer on Natasha’s engagement because it’s more discriminating. Because I’ve read the novel (twice), I can infer where the search is going wrong. Also, I wrote a little utility function that displays which chunks it has found, with their distance metrics and metadata.
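The original utility isn’t shown here, but a minimal equivalent is easy to write, assuming Chroma’s standard query-result shape (parallel lists of lists under the `ids`, `documents`, `distances`, and `metadatas` keys):

```python
def show_hits(result: dict, width: int = 80) -> str:
    """Format one Chroma query result: chunk id, distance, metadata,
    and the beginning of each retrieved passage."""
    lines = []
    for cid, doc, dist, meta in zip(
        result["ids"][0],
        result["documents"][0],
        result["distances"][0],
        result["metadatas"][0],
    ):
        lines.append(f"{cid}  dist={dist:.4f}  {meta}")
        lines.append("  " + doc[:width].replace("\n", " "))
    return "\n".join(lines)
```

Printing this after each retrieval makes it obvious when a chunk with a small distance is nonetheless thematically off target.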
