AI’nt That Easy #26: Cracking the Challenges of LLMs: Insights from an AI Engineer

Aakriti Aggarwal
4 min read · Nov 29, 2024

Imagine an AI tool so powerful that it not only understands your questions but offers tailored solutions across domains. Large Language Models (LLMs) are the pioneers of this revolution. Yet, like any transformative technology, LLMs face their own set of challenges. From consistency issues to language representation gaps, the journey to perfecting LLMs is riddled with fascinating obstacles.

As an AI engineer who works closely with generative AI (Gen AI), frequently building and collaborating on projects that use LLMs, I can describe these challenges firsthand.

Let’s dive into the most pressing challenges and explore how innovators are pushing boundaries to overcome them.

The Problem of Consistency

Imagine asking an AI the same question twice and getting completely different answers. Sounds frustrating, right? This is the reality of current LLMs. Even with the temperature set to zero, small input changes can dramatically alter outputs. It’s like having a brilliant but unpredictable assistant who can’t quite stick to a consistent script.
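To build intuition for why this happens, here is a minimal toy sketch (not any real model's decoder) of temperature-zero, or greedy, decoding: the model always picks the highest-scoring token. When two candidate tokens are nearly tied, even a tiny shift in the scores — the kind a small rewording of the prompt can cause — flips the choice, and every token generated afterward diverges from there. The logit values below are made up for illustration.

```python
import math

def softmax(logits):
    # Standard softmax with max-subtraction for numerical stability
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_pick(logits):
    # Temperature-zero decoding: always take the most probable token
    probs = softmax(logits)
    return max(range(len(probs)), key=lambda i: probs[i])

# Two near-tied candidate tokens: a tiny perturbation in the scores,
# such as one caused by rephrasing the prompt, flips the greedy choice.
logits_before = [2.000, 1.999, 0.5]   # token 0 barely wins
logits_after  = [1.999, 2.000, 0.5]   # token 1 barely wins

print(greedy_pick(logits_before))  # 0
print(greedy_pick(logits_after))   # 1
```

Because each chosen token is fed back in as context for the next one, a single flipped token early on can snowball into an entirely different answer, which is why "deterministic" settings alone do not guarantee consistent outputs.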
