ChatGPT - Precise Imprecision
Strategic Machines
Chat is precisely imprecise
We thought it was important after our last post to revisit the phenomenon of ‘large language models’ and go a little deeper on the good, the bad, and the ugly. We are at the peak of a classic ‘hype cycle’. It is critical at this stage to navigate carefully and avoid stranded investments.
ChatGPT set a record as the fastest-growing web application in internet history. The capabilities are astonishing. New business momentum has reached tornado force. Along with other technology transformers from Stability, AWS, Meta and Google, OpenAI has shown that ‘we are not in Kansas anymore’. But where are we exactly?
In 2016, The Economist published an article predicting that chatbots would revolutionize the way commerce is conducted. We have been working in the field of conversational commerce since 2015 and know something about this landscape. We ignored the hype and took a more measured path. From our vantage point, we witnessed a landscape littered with more than a few firms getting ahead of the market and the technology.
In the current cycle, we’ve noted several enterprising startups that have harnessed the technology and delivered viable products. Jasper released a platform to help content creators generate deliverables, like marketing copy and blog posts. More importantly, they identified a market segment worldwide that was underserved, and reached that segment with a platform that boosts productivity, enhances creative outcomes and builds community. But we’ve also noted thousands of other products being released which are helpful, even clever, but not game changers. Document search is one example that comes to mind.
The possibility of having a conversation with machines captivates the imagination. Alan Turing proposed a test in 1950 for digital machines attempting to mimic a human in conversation. But as we explained in prior posts, conversational technology is hard, even in the narrow context of document search. Human interactions tend to be elliptical, nuanced, and imprecise. Chatbots are coded to parse human language, draw inferences, classify intent, or detect mood. They are expensive to build and to maintain for complex use cases. Large language models like ChatGPT have delivered a whole new paradigm for human interaction, drawing on billions of data points to generate a relevant answer, but they rely on probabilistic judgments to predict relevance. That kind of precision is imprecise. While these types of models have been leveraged for decades in fraud detection and pricing, and most recently for content generation, things break down in chat. Precise imprecision does not pass the production test, much less the Turing test, for a whole class of use cases.
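To make the contrast concrete, here is a minimal sketch in Python (our own illustration; the intent rules, example utterances, and toy logit scores are invented, and real models work over vast vocabularies rather than three candidate words). A hand-coded chatbot tries to resolve intent deterministically and breaks on phrasing it has not seen, while a language model samples a continuation from a probability distribution, so its answers are relevant on average rather than guaranteed.

```python
import math
import random

# 1) A classic chatbot: hand-coded rules that try to classify intent exactly.
INTENT_RULES = {
    "order_status": ["where is my order", "track my order", "order status"],
    "refund": ["refund", "money back", "return this item"],
}

def classify_intent(utterance: str) -> str:
    """Deterministic keyword matching: precise, but brittle and costly to extend."""
    text = utterance.lower()
    for intent, phrases in INTENT_RULES.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"  # elliptical, nuanced phrasing falls through the cracks

# 2) A language model, in miniature: a probability distribution over continuations.
def sample_next_token(logits: dict, temperature: float = 0.8) -> str:
    """Softmax over candidate tokens: relevant on average, never guaranteed."""
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    draw = random.uniform(0.0, total)
    running = 0.0
    for tok, w in weights.items():
        running += w
        if draw <= running:
            return tok
    return tok  # floating-point edge case: fall back to the last candidate

print(classify_intent("Please track my order for me"))        # -> "order_status"
print(classify_intent("any idea where my order's got to?"))   # -> "unknown" (brittle)
print(sample_next_token({"shipped": 2.1, "delayed": 1.4, "azure": -3.0}))  # probabilistic
```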
So where are we? At Strategic Machines, we see a wide range of conversational use cases that are beginning to emerge but require much more work. We see promise in using LLMs as logic layers, rather than data layers, but more on that later. So right now, while we are only at the beginning of this journey, “if we walk far enough we shall sometime come to someplace” (with apologies to Dorothy). That is precise imprecision.