Future with AI

ChatGPT and LLMs have taken hold of the global news cycle. There is endless discussion about how ChatGPT can be used for everything: concerns about job losses, fear of AI being used for nefarious purposes, calls for regulation to control bad actors. Every company is pivoting to be an “AI” company. While there is plenty of hype, we’re still in a nascent phase of AI. Much of what’s being built will be thrown away, but there are a few future possibilities I’m looking forward to.

Adaptive Workflows. Today, we rely heavily on workflow discovery or code-based extensions of SaaS products to match the jobs we’re trying to get done. OpenAI’s GPT-4 has demonstrated the ability to understand APIs and programming languages well enough to generate new code on the fly. Imagine dynamically adjusting a website for accessibility, adding a single button that executes a common multi-step workflow, or integrating data sources on the fly.
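
To make this concrete, here’s a minimal sketch of the pattern using the OpenAI Python SDK. The prompt, the model name, and the generate_workflow_step helper are illustrative assumptions, not a production design:

```python
# Sketch: ask a model to generate glue code for a workflow on the fly.
# Assumes the OpenAI Python SDK (v1) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def generate_workflow_step(api_docs: str, task: str) -> str:
    """Ask the model to write a small script that performs `task`
    against the API described in `api_docs`."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You write short, self-contained Python scripts."},
            {"role": "user",
             "content": f"API docs:\n{api_docs}\n\nWrite a script that: {task}"},
        ],
    )
    return response.choices[0].message.content

# e.g., stitch two SaaS products together without a prebuilt integration:
# script = generate_workflow_step(crm_docs, "export yesterday's new leads as CSV")
```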

Compression. For SaaS companies, data is essential. Terabytes to petabytes of data are eventually captured and stored. Some of it is analyzed and rolled up for future use; a lot more sits there idle. LLMs have demonstrated that those datasets could be compressed into gigabytes, maybe smaller. While running LLMs on your laptop or phone doesn’t yet scale, we’re already at a place where the data can fit on portable devices. An added bonus: compressing data this way lets companies strip out potentially privacy-sensitive data.
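
Some napkin math on why the sizes work out, with illustrative numbers (a 7B-parameter model at 4-bit quantization) rather than measurements:

```python
# Back-of-the-envelope: how small is a model that has "compressed" a corpus?
# The figures below are illustrative assumptions, not measurements.
params = 7_000_000_000        # a 7B-parameter model
bytes_per_param = 0.5         # 4-bit quantization
model_gb = params * bytes_per_param / 1e9
print(f"~{model_gb:.1f} GB")  # ~3.5 GB: terabytes of raw data distilled into
                              # something that fits on a phone
```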

Probabilistic. Hallucinations are a concern for LLMs. Most people expect computers to provide exact answers, and given the statistical nature of these models, there are going to be moments where LLMs get it wrong, really, really wrong. In reality, for most things, you don’t need exactness. You want something in the ballpark. In the best case, you get the exact thing you want. In the worst case, the output doesn’t help you, the human. In medical and financial scenarios, guardrails will be necessary. For personalization or experimentation scenarios, good enough is just fine. Too much time is spent chasing 100% accuracy when it doesn’t matter.

Fine-tuning. Training an LLM like GPT-4 from scratch is a time-consuming and expensive process. We’ll likely see foundation models that individuals and companies use as a base, fine-tuning them with their own datasets. This lets a user develop a custom model with a broad range of capabilities at a fraction of the cost.
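
As a rough sketch of what that workflow looks like, here’s the hosted fine-tuning route via the OpenAI Python SDK; the training file and base model below are placeholders:

```python
# Sketch: fine-tune a hosted foundation model on your own dataset instead of
# training from scratch. File name and base model are placeholder assumptions.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of prompt/response examples drawn from your own data.
training_file = client.files.create(
    file=open("company_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tuning job against a general-purpose base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)  # poll the job; the result is a custom model you can query
```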

Local. Today, LLMs consume GPUs in the cloud for both training and inference. As techniques improve, hardware becomes more powerful, and technology gets cheaper, we’ll be able to run advanced models like LLMs on our portable devices. Latency goes away. Usage becomes even more conversational and spontaneous. A true copilot.
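
You can get a taste of this today. Here’s a minimal sketch with llama-cpp-python, one of several local runtimes; the model path is a placeholder for whatever quantized model you have on disk:

```python
# Sketch: run a quantized model entirely on-device, no cloud round trip.
# Assumes the llama-cpp-python package and a GGUF model file already on disk.
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model.gguf")  # loads into local memory

# Inference happens on this machine, so latency is bounded by local compute,
# not the network.
output = llm("Q: What's a quick dinner I can make tonight? A:", max_tokens=64)
print(output["choices"][0]["text"])
```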

Inverted Personalization. Given the possibilities above, what if, instead of each company offering a personalization service for its products, I had my own model that personalizes everything for me? What if that model could access my local data to pick the next best movie for me, regardless of the streaming service I use? What if it found a medical specialist based on my medical history?

A glimmer of this already happens on my iPhone with CarPlay. Depending on the time of day, Apple Maps will suggest a route I’m likely taking: the grocery store on a Sunday, the school drop-off on a weekday morning. It’s not 100% right, but more often than not it presents the correct destination. I don’t need the directions, but I do appreciate knowing how long the drive will take given the traffic. I’d love to own the model making those recommendations.

The future is going to be wild!