#prompt-engineering #ai #football #humor

Prompt Engineering Lessons from Finding Grotesque Footballers

What I learned about effective prompting while trying to get ChatGPT to find obscure, forgettable Serie A players

I wanted ChatGPT to help me build a satirical “dream team” of forgettable Serie A players — the kind where you say the name and people respond “who?… oh right… what a flop.”

Think Moris Carrozzieri. Steven Nkemboanza Mike Christopher Nzonzi. Players who passed through big clubs leaving zero trace.

What followed was a masterclass in iterative prompt refinement.

Attempt 1: The Naive Prompt

“Give me a list of Serie A flops from the last 15 years”

Result: Balotelli, Gabigol, Osvaldo, Adriano.

Problem: These are famous flops. They’ve been mocked to death. Everyone knows them. That’s not what I wanted.

Attempt 2: Adding Constraints

“Players who are NOT too famous, NOT already used for mockery”

Result: Cerci, Ranocchia, Nainggolan.

Problem: Still too canonical. These names spark immediate recognition. I needed players who require effort to remember.

Attempt 3: The Specificity Breakthrough

“Players who did MAX 1-2 seasons, where when you name them the reaction is ‘who?… ah right…’”

Result: Getting warmer. Names like Ervin Zukanović, Marcelo Djaló, Cristian Sapunaru.

Problem: ChatGPT kept sneaking in players like Higuaín (Milan version) and Adriano (Roma epilogue). Famous players in bad periods are not the same as obscure players.

The Final Prompt

After many iterations, I developed strict criteria:

MUST have:

  • Maximum 1-2 Serie A seasons
  • Passed through big clubs (Inter, Milan, Juve, Roma, Lazio, Napoli)
  • Left ZERO impact — few minutes, no memorable moments
  • Difficult to remember — reaction must be “who?… oh yes…”
  • Fragmented careers — infinite loans, exotic destinations
  • Paid more than they delivered

MUST NOT be:

  • Famous even if disappointing
  • Players who became national memes
  • Anyone with more than 3 relevant Serie A seasons
  • Anyone who “did something anyway”

Explicit exclusion list: Balotelli, Osvaldo, Gabigol, Adriano, Higuaín, Candela, Behrami, Dzemaili, Cerci, Ranocchia, Nainggolan…
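
I did all of this in the ChatGPT interface, but the criteria translate naturally into a reusable system prompt. Here's a minimal sketch using the official `openai` Python package; the model name and the exact wording are placeholders of mine, not the literal prompt from my chats:

```python
# Sketch: packaging the MUST / MUST NOT criteria as a reusable system prompt.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment;
# the model name is an assumption, not something from the original chats.
from openai import OpenAI

SYSTEM_PROMPT = """You pick obscure Serie A players for a satirical dream team.

MUST have:
- Maximum 1-2 Serie A seasons
- Passed through a big club (Inter, Milan, Juve, Roma, Lazio, Napoli)
- Left ZERO impact: few minutes, no memorable moments
- Reaction to the name must be "who?... oh right..."
- Fragmented career: endless loans, exotic destinations
- Paid more than they delivered

MUST NOT be:
- Famous, even if disappointing
- A national meme
- Anyone with more than 3 relevant Serie A seasons
- Anyone who "did something anyway"

Never suggest: Balotelli, Osvaldo, Gabigol, Adriano, Higuain, Candela,
Behrami, Dzemaili, Cerci, Ranocchia, Nainggolan.
"""

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Give me five candidates, one line on why each fits."},
    ],
)
print(response.choices[0].message.content)
```

Packaging it this way also makes the exclusion list trivial to extend every time another too-famous name slips through.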

Prompt Engineering Lessons

1. Negative constraints are as important as positive ones

Saying what you don’t want is often more effective than describing what you want. The exclusion list was crucial.

2. Examples calibrate better than descriptions

“Like Moris Carrozzieri” communicated more than paragraphs of criteria. Good examples anchor the model’s understanding.
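
If you script this, the same idea is just a small block of anchors appended to the prompt: a couple of "yes, like this" and "no, not like this" names do more calibration work than extra adjectives. A rough sketch (the helper function and both lists are illustrative, not from my actual prompts):

```python
# Sketch: calibrating by example instead of by description.
GOOD_EXAMPLES = ["Moris Carrozzieri", "Marcelo Djaló"]  # "who?... oh right..."
BAD_EXAMPLES = ["Balotelli", "Higuaín"]                 # famous, already mocked

def calibration_block() -> str:
    """Build a short example block to append to the main prompt."""
    return (
        f"Calibration examples:\n"
        f"Right level of obscurity: {', '.join(GOOD_EXAMPLES)}\n"
        f"Wrong level (too famous): {', '.join(BAD_EXAMPLES)}"
    )

print(calibration_block())
```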

3. Iterate on failures, don’t restart

Each “wrong” answer taught me what to exclude next. The prompt evolved through conversation, not replacement.
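
In API terms, that means keeping one message list and appending every rejected answer together with its correction, rather than opening a fresh chat. A minimal sketch, assuming the `openai` package; the corrections and model name are illustrative:

```python
# Sketch: iterating inside one conversation instead of restarting.
# Assumes the `openai` package; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "system", "content": "Suggest obscure Serie A players: max 1-2 seasons, big club, zero impact."},
    {"role": "user", "content": "Name one forgotten Serie A signing."},
]

# Each rejected answer stays in context, so the next attempt knows what to avoid.
for correction in [
    "Too famous - he became a national meme. Go more obscure.",
    "Still too recognizable. I want a 'who?... oh right...' reaction.",
]:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})
    messages.append({"role": "user", "content": correction})

final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```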

4. Specificity defeats defaults

ChatGPT defaults to “famous examples” because they’re statistically common in training data. You need explicit constraints to escape the obvious.

5. The test case matters

My quality check: “If the reader immediately recognizes them → wrong answer. If they need to think → correct.”

Defining success criteria made evaluation possible.
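
I applied this check by eyeballing the names, but the same criterion could in principle be turned into a second-pass filter that asks the model to score how recognizable each candidate is. This is an extrapolation I never actually ran; the 1-10 scale, the cutoff of 3, and the model name are all arbitrary assumptions:

```python
# Sketch (not something from the original chats): a second-pass
# "recognizability" filter. The scale and cutoff are arbitrary assumptions.
from openai import OpenAI

client = OpenAI()

def is_obscure_enough(player: str, cutoff: int = 3) -> bool:
    """Ask the model to score recognizability; keep only near-unknowns."""
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{
            "role": "user",
            "content": (
                f"On a scale of 1 (nobody remembers them) to 10 (household name), "
                f"how recognizable is the Serie A player {player} to an average "
                f"Italian football fan? Answer with a single integer."
            ),
        }],
    )
    try:
        score = int(reply.choices[0].message.content.strip())
    except ValueError:
        return False  # unparseable answer: treat as a failed check
    return score <= cutoff
```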

The Meta-Lesson

The hardest part wasn’t getting ChatGPT to understand football. It was getting it to understand obscurity — that I wanted the opposite of what it’s trained to surface.

Most prompting failures come from this gap: the model optimizes for what’s common, and you want what’s specific.

The solution is always the same: be painfully explicit about what you don’t want, give calibrating examples, and iterate.


> The final squad remains classified. Some wounds are too fresh.
