Connor Davis
@connordavis_ai

You don’t need GPT-5 or Claude 5...

You need better prompts.

MIT just confirmed what AI experts already knew:

Prompting drives roughly 50% of performance gains.

Here’s how to level up without touching the model: 
When people upgrade to more powerful AI, they expect better results.

And yes, newer models do perform better.

But this study found a twist:

Only half the quality jump came from the model.

The rest came from how users adapted their prompts.
The researchers tested this with OpenAI’s image generators: DALL·E 2 vs DALL·E 3.

~1,900 people had to recreate target images using prompts.

Result:

DALL·E 3 beat DALL·E 2, but the biggest differentiator wasn’t just the model.

It was how users changed their prompting behavior. 
With DALL·E 3, users:

- Wrote 24% longer prompts
- Used more descriptive words
- Prompted more consistently
- Iterated more during the task

They weren’t taught prompting.
They learned by doing, fast.

And it directly improved performance. 
The researchers isolated the improvement:

“Roughly 50% of the gains came from user adaptation, not the model itself.”

Let that sink in.

Prompting is a human skill, one that grows with feedback and experimentation.
Another key finding:

Users with no technical background improved just as much.

Why?

Because great prompting isn’t about code.

It’s about clear communication: being able to describe what you want with precision.

That’s trainable. 
But here’s the wild part:

When GPT-4 automatically rewrote people’s prompts…

Performance dropped 58%.

Why?

The AI changed what users meant, added fluff, or misunderstood intent.

“Helpful automation” made the output worse.
This is a warning shot for tool builders:

- Don’t over-automate prompt rewriting
- Hidden instructions can backfire
- Let users maintain control over intent

AI should amplify user input, not overwrite it.
So what’s the play for companies?

Stop just upgrading your models.

Start upgrading your people.

- Teach prompting as a skill
- Build interfaces that encourage experimentation
- Track prompt quality as a KPI

That’s how you unlock full ROI from AI tools.
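If you want to treat prompt quality as a KPI, a minimal sketch is to log each prompt a user submits and compute the same signals the study observed (length, iteration count). The `PromptTracker` class and its metrics below are hypothetical illustrations, not anything from the paper:

```python
# Hypothetical sketch: log user prompts and report simple quality metrics.
from dataclasses import dataclass, field


@dataclass
class PromptTracker:
    prompts: list = field(default_factory=list)

    def log(self, prompt: str) -> None:
        """Record one submitted prompt."""
        self.prompts.append(prompt)

    def avg_length(self) -> float:
        """Average prompt length in words (longer prompts tracked with better results)."""
        if not self.prompts:
            return 0.0
        return sum(len(p.split()) for p in self.prompts) / len(self.prompts)

    def iterations(self) -> int:
        """How many attempts the user made on this task."""
        return len(self.prompts)


tracker = PromptTracker()
tracker.log("a cat")
tracker.log("a fluffy orange cat sitting on a windowsill at sunset")
print(tracker.avg_length())   # (2 + 10) / 2 = 6.0 words
print(tracker.iterations())   # 2 attempts
```

Aggregating these per user over time gives you a trainable, trackable signal instead of only watching model-level metrics.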
People who started off worse improved the most.

Meaning:

Better models + prompt training = smaller performance gaps.

This isn’t just a productivity gain; it’s a way to level the playing field.
Prompting isn't a magic trick.
It's not reserved for devs.
And it's not going away.

It’s a new form of digital literacy.

Train it. Track it. Invest in it.

It’s roughly 50% of your AI advantage, and rising.

Read the whole thing if you want:
https://arxiv.org/abs/2407.14333v2
I share AI updates here, but I build the tools at http://getoutbox.ai, the fastest way to create your own AI voice agent without code.

Join our Skool community to learn, share, and get early access to AI voice strategies →
https://www.skool.com/outbox-ai/about