
AI’nt That Easy #32: From Messy Data to Answers: Simplifying Workflows with LLMs

Aakriti Aggarwal
4 min read · Jan 7, 2025


Imagine you’re running a customer support team for a fast-growing tech company. Every day, you’re bombarded with thousands of tickets, chat logs, emails, and product reviews. Buried in this mountain of unstructured data are answers to customer problems, insights into product improvements, and trends that could drive business decisions — if only you could make sense of it all.

This is where Large Language Models (LLMs) like OpenAI’s GPT, Google’s Gemini, and Meta’s LLaMA come in. But here’s the challenge: LLMs need structured, meaningful input to deliver valuable answers, and most of this data is anything but structured. The solution? A streamlined workflow that transforms raw, unstructured data into insights using embedding models, vector databases, and LLMs. Let’s explore how this works step by step!
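Before walking through the individual steps, here is a minimal sketch of the full pipeline in Python. It assumes the sentence-transformers library for embeddings, FAISS as the vector index, and OpenAI’s chat API for the final answer; the model names, example documents, and question are illustrative, not prescriptive.

```python
# A minimal end-to-end sketch of the workflow: embed unstructured text,
# index it in a vector store, retrieve relevant chunks, and ask an LLM.
# Assumes sentence-transformers, faiss-cpu, and openai are installed.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer
from openai import OpenAI

# 1. Raw, unstructured inputs (tickets, chat logs, reviews, ...)
documents = [
    "Customer reports the app crashes when exporting a PDF.",
    "Refund requested because shipping took three weeks.",
    "Great product, but the dark mode is hard to read.",
]

# 2. Embed each document into a dense vector
embedder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = embedder.encode(documents, normalize_embeddings=True)

# 3. Store the vectors in a vector index (inner product == cosine here)
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(np.asarray(vectors, dtype="float32"))

# 4. Retrieve the documents most relevant to a question
question = "What are customers complaining about in the app?"
query = embedder.encode([question], normalize_embeddings=True)
_, ids = index.search(np.asarray(query, dtype="float32"), 2)
context = "\n".join(documents[i] for i in ids[0])

# 5. Let the LLM answer, grounded in the retrieved context
client = OpenAI()  # reads OPENAI_API_KEY from the environment
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Context:\n{context}\n\nQuestion: {question}"}],
)
print(answer.choices[0].message.content)
```

Each of the steps below unpacks one stage of this sketch in more detail.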

The Workflow: From Data to Insights

[Workflow diagram: Photo by Manralai]

Step 1: Understanding Unstructured Data

Unstructured data is everywhere: emails, audio recordings, video transcripts, and logs. This data is rich in information but hard for machines to process.

The first step in the workflow is to extract meaning from this data, and that’s where embedding models step in.
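As a quick illustration of what an embedding model does, the sketch below (again assuming the sentence-transformers library and the all-MiniLM-L6-v2 model, both illustrative choices) turns a few support messages into vectors and shows that semantically similar messages land close together.

```python
# Embeddings turn free-form text into fixed-length vectors whose geometry
# reflects meaning: similar texts end up with a high cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

texts = [
    "The app keeps crashing when I upload a file.",        # support ticket
    "Uploading a document makes the application freeze.",  # similar complaint
    "I love the new dark mode theme!",                      # unrelated review
]

vectors = model.encode(texts)            # shape (3, 384) for this model
scores = util.cos_sim(vectors, vectors)  # pairwise cosine similarities

print(scores[0][1])  # high: both describe the same upload problem
print(scores[0][2])  # low: crash report vs. praise for dark mode
```

Because meaning is now encoded as geometry, "finding relevant data" becomes a nearest-neighbor search over these vectors, which is exactly what the vector database in the next part of the workflow is built for.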
