Teardown of a simple idea.
Jumping on the bandwagon

I’ve been sifting through GPT-related applications on Twitter and Product Hunt over the past 6 months.
I think there’s an amazing amount of work going on in the space, and that holds regardless of which LLM comes out on top (my prediction: the ‘winner’ will keep changing every 2 weeks).
I realise that what I’m thinking through is a beaten-to-death concept. It goes like this:
- the world has a lot of unstructured data
- this data gets published in news reports, PDFs, etc.
- our AI will go through all of that and fetch the relevant information from it
- now with GPT you’ll be able to ask the data questions and draw visualisations from it too
But, I think this is a real application of GPT.
In my opinion this has applications in the following:
- Trading
- Asset Management
- I’m still working on this list
Current platforms for this in Finance include solutions like:
- Tracxn
- Bloomberg Terminal
- Screener
- PowerBI
- Trove Research ( I came across this recently )
The way I understand it, LLMs are
Great at
- structuring unstructured information
- summarising information
- extracting information
Good at
- connecting information
- understanding what an insight is
- using that understanding to derive an insight from provided information
Getting better at
- creating non-generic content
- specialising in a particular skill
In a sense, we’re going to see content split into pre-LLM and post-LLM, in a similar vein to how carbon dating splits time into pre- and post-nuclear-testing eras.
A Potential Product
- Setup a list of feeds to follow
- Push your private content into this
- Define what insights look like to you by providing examples ( prompting - also will have predefined prompts )
- Define what you want the output to look like
- Tag content to a feed
- Use predefined prompts to auto-compile it into structured information
- Ask specific questions on the data
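The setup steps above can be sketched in code. This is a hypothetical data model, not a real API — `Feed`, `InsightSpec`, and `build_prompt` are all illustrative names I’m assuming for the sake of the sketch:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: feeds to follow, example-driven insight
# definitions, and a desired output format, compiled into one prompt.

@dataclass
class Feed:
    name: str
    url: str
    tags: list = field(default_factory=list)  # "Tag content to a feed"

@dataclass
class InsightSpec:
    # "Define what insights look like to you by providing examples"
    examples: list
    output_format: str  # "Define what you want the output to look like"

def build_prompt(spec: InsightSpec, document: str) -> str:
    """Compile the user's examples and format into a single LLM prompt."""
    example_block = "\n".join(f"- {e}" for e in spec.examples)
    return (
        "You extract insights from documents.\n"
        f"Insights look like:\n{example_block}\n"
        f"Output format: {spec.output_format}\n"
        f"Document:\n{document}"
    )

feed = Feed("carbon-markets", "https://example.com/rss", tags=["energy"])
spec = InsightSpec(
    examples=["Company X raised prices by 4% citing input costs"],
    output_format="bullet summary with citations",
)
prompt = build_prompt(spec, "fetched article text goes here")
```

The point of the sketch is that the user’s configuration, not the model, is where the product lives.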
The truth is, a lot of the information we rely on to make decisions:
- does not exist publicly
- if it exists privately, you don’t necessarily have the time to compile it
- if it exists publicly, it is hard to keep track of
- if you can keep track of it, it’s not in a structured format
- even if it is in a structured format, it is hard to keep going back to it and remember where that specific information lives
The way I’ve been thinking of this is:
- a personal wiki manager
- with predefined prompts
- that constantly consumes streams of information based on your requirements ( can even be your emails, chats )
- and constantly updates pre-defined reports for you based on incoming information along with citations for updates and a changelog
- Someone can then subscribe to it as well and you can potentially charge them for it
- or you just become much better at your job
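A minimal sketch of the “constantly updated report” idea, assuming a stand-in `extract_insight()` where the real LLM call would go — every name here is illustrative:

```python
import datetime

# Each incoming item patches a living report and appends a changelog
# entry carrying a citation back to the source.

def extract_insight(item: dict) -> str:
    # Placeholder for the real LLM extraction step.
    return f"Insight from {item['source']}: {item['text'][:40]}"

class LivingReport:
    def __init__(self, title: str):
        self.title = title
        self.sections = []   # current state of the report
        self.changelog = []  # (timestamp, citation, summary) tuples

    def ingest(self, item: dict):
        """Consume one incoming item and update report + changelog."""
        insight = extract_insight(item)
        self.sections.append(insight)
        self.changelog.append(
            (datetime.datetime.now().isoformat(), item["source"], insight)
        )

report = LivingReport("Carbon credit pricing")
report.ingest({"source": "https://example.com/a", "text": "EU ETS prices rose 3%"})
```

The changelog is what makes the report subscribable: a reader can audit where every update came from.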
A Secondary Application
I’ve also been thinking of how lead sourcing and qualifying works in large orgs.
Companies pay a lot of money to tools like
- RocketReach
- ZoomInfo
to fetch information about leads they’ve already put together
But, that still leaves the work of putting together leads to sales people.
While a Rolodex is great, it runs out of contacts at some point, and it might not always have the answers.
A secondary application is one that
- helps companies put these leads together
- keeps updating this list of leads
- keeps drawing connections between the leads
- allows a sales manager or lead source head to define what leads they’re looking for with filters
- sends out reports
- syncs up with whatever CRMs you’re using
What’s wrong with it?
There are a lot of holes to poke in this.
I think eventually this gets built by companies themselves for their own analysts.
But, my bet is that with the mobility knowledge workers now have, the ability to own your work beyond any single employer will become important and a key differentiator.
Another aspect to this is the level of access to proprietary information and what exactly can be a moat here.
I don’t think the LLM itself can be a moat. With the quality of open-source LLMs out there, the best bet is to let the user pick the LLM they want and layer their data on top of it.
The content is not a moat either, since you don’t own it. Building a recommendation system that lets the user pull in relevant information from elsewhere might be interesting, but it’s not really a moat.
The nuance is in:
- actually setting up and using an LLM - barrier to entry is getting really low
- using multiple LLMs at the same time
- chaining prompts
- designing the output
- constantly streaming the inputs
- summarising the information and auto cleaning it
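Two of those points — chaining prompts and using multiple LLMs at once — can be sketched together. `call_llm` is a stub standing in for whichever provider the user picks; none of these names are from a real library:

```python
# Sketch: a fixed chain (clean -> summarise -> extract) fanned out
# across several models so the user can pick the LLM they want.

def call_llm(model: str, prompt: str) -> str:
    # Stub: a real implementation would hit the chosen model's API.
    return f"[{model}] " + prompt.splitlines()[0]

def chain(model: str, document: str) -> str:
    """Chain prompts: each step's output feeds the next step."""
    cleaned = call_llm(model, f"Clean this text:\n{document}")
    summary = call_llm(model, f"Summarise:\n{cleaned}")
    return call_llm(model, f"Extract key insights:\n{summary}")

def fan_out(models: list, document: str) -> dict:
    """Run the same chain on multiple LLMs and compare results."""
    return {m: chain(m, document) for m in models}

results = fan_out(["model-a", "model-b"], "Quarterly revenue rose 12% on new contracts")
```

The design choice being argued for: the chain and the fan-out are the product; the models underneath are swappable.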
I would love to know what you think and if you or someone you know wants to discuss this in more detail, please hit me up!
Sainath