Introducing Better Summarizer GPT
An LLM summarization technique, Chain of Density, that generates concept-dense summaries.
It improves a common task for people using ChatGPT for the first time at work or in school.
Harness the power of the latest AI research (arXiv, Sept. 2023) with an advanced summarizer. Using the Chain of Density approach, this custom GPT produces high-quality summaries that pack more information into fewer words without losing clarity or coherence.
Using Better Summarizer
First, click here to launch.
Next, we’ll go to the Wikipedia entry for William Shakespeare and copy the main article so we can paste it into ChatGPT in a moment. (Feel free to use your own content as you follow along.)
Paste in the entire article. This starts the process, and you should see output that looks something like the example below.
As with all generative AI, your mileage may vary with zero-shot prompting, or in this case, few-shot prompting through the inferred loop.
Essentially, it starts by generating an initial baseline summary.
Then, it analyzes that summary to identify critical pieces of information that are still missing, called “entities.” Armed with that knowledge gap, it generates a new, more comprehensive summary incorporating those missing bits.
This process repeats 3-5 times, with each new summary cycle building upon and refining the previous one until you're left with a rich, distilled final summary capturing all the salient points. It's as if Better Summarizer is continually cross-checking its own work, pinpointing holes, and patching them up.
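The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the GPT's actual implementation: `chain_of_density` is a hypothetical helper, the `generate` callable stands in for whatever LLM API you use, and the prompt wording is an assumption.

```python
def chain_of_density(article, generate, rounds=4):
    """Iteratively densify a summary of `article`.

    `generate(prompt)` is any text-completion callable (e.g. a thin
    wrapper around an LLM API); it is a placeholder, not a real library call.
    """
    # Step 1: generate an initial baseline summary.
    summary = generate(f"Summarize this article in 2-3 sentences:\n{article}")

    for _ in range(rounds):
        # Step 2: find informative "entities" the summary is still missing.
        missing = generate(
            "List 1-3 informative entities from the article that are "
            f"absent from the summary.\nArticle:\n{article}\nSummary:\n{summary}"
        )
        # Step 3: fold the missing entities in without growing the summary.
        summary = generate(
            "Rewrite the summary to include these entities without "
            f"increasing its length.\nEntities: {missing}\nSummary:\n{summary}"
        )
    return summary
```

With `rounds=4`, the model is called nine times in total: once for the baseline, plus an identify-and-rewrite pair per round.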
You can review the full chat here.
Give it a try and give the GPT a rating.
Remember: after you use a GPT at least once, you can always get back to it by typing the ‘@’ sign in any chat within ChatGPT.
Notes for prompt engineering
Builds on aspects of Reflexion and chain-of-thought prompting.
Slightly agentic: since a model can read every token it has already written, it can use that output as a feedback loop between the initial and nth summary.
Limitation for agentic behavior: this use case works because the output is short, so the model (mostly) provides its own feedback loop within the output token limit.
Adding context makes the loop more powerful. Do you want a summary of Shakespeare’s life overall, his literary works, or his personal life specifically?
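One way to supply that context is to thread a focus hint into the instruction itself. The helper below is purely illustrative, not part of the actual GPT, and its prompt wording is an assumption about how such an instruction might look:

```python
def build_cod_prompt(article, focus=None):
    """Build a Chain of Density instruction, optionally biased toward a topic.

    Hypothetical helper for illustration; `focus` steers which entities
    the model should prioritize when densifying.
    """
    instruction = (
        "Write an initial 2-3 sentence summary, then repeat 4 times: "
        "identify 1-3 informative entities missing from the summary and "
        "rewrite it to include them without increasing its length."
    )
    if focus:
        # Bias entity selection toward the requested topic.
        instruction += f" Prioritize entities related to {focus}."
    return f"{instruction}\n\nArticle:\n{article}"
```

For example, `build_cod_prompt(article, focus="Shakespeare's literary works")` would steer the densification toward his plays and poems rather than his biography.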
It would be interesting to research how much denser and better the final summary would be if the feedback loop were multi-agent rather than contained in a single model's output.