
[FEATURE] Extract RubyLLM::Prompt from Agent's existing prompt rendering internals #667

@kryzhovnik

Description

Scope check

  • This is core LLM communication (not application logic)
  • This benefits most users (not just my use case)
  • This can't be solved in application code with current RubyLLM
  • I read the Contributing Guide

Due diligence

  • I searched existing issues
  • I checked the documentation

What problem does this solve?

I hesitated to open this, since prompt templates were explicitly rejected in #70, but I think the situation has changed since then.

The Agent work that landed since then introduced what #70 was asking for: ERB file conventions in app/prompts/, path resolution, rendering with locals, and a dedicated PromptNotFoundError. There's a docs section called "Prompt Management and Conventions."

The difference now is that this machinery is private to Agent. If you want to render a prompt following the gem's own convention outside of an Agent subclass (e.g. for a plain RubyLLM.chat call), you reimplement the path + ERB logic yourself.
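
For illustration, here's roughly what that hand-rolled duplication looks like today. This is a sketch assuming a Rails app layout (Rails.root, app/prompts/) per the documented convention, and it uses with_instructions to pass the rendered text as a system prompt; none of this is code from the gem itself:

require 'erb'

# Duplicates Agent's private rendering convention in application code.
template_path = Rails.root.join('app', 'prompts', 'friend.txt.erb')
raise "Prompt not found: #{template_path}" unless File.exist?(template_path)

system_prompt = ERB.new(File.read(template_path)).result_with_hash(name: 'Andrey')
RubyLLM.chat.with_instructions(system_prompt).ask('Hi!')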

Proposed solution

Extract a small RubyLLM::Prompt class from Agent's existing private methods:

RubyLLM::Prompt.render('friend', name: 'Andrey')
# => renders app/prompts/friend.txt.erb with locals

Just the prompt root, path construction, and ERB#result_with_hash. Agent keeps owning its class-name scoping and runtime context resolution, delegating the file rendering down to Prompt.
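
As a sketch, assuming a Rails app layout with the same app/prompts root, the .txt.erb extension, and the existing PromptNotFoundError (the root class method and the error's constructor arguments are illustrative, not Agent's actual internals):

require 'erb'

module RubyLLM
  class Prompt
    # Mirrors the Agent convention: templates live under app/prompts/.
    def self.root
      Rails.root.join('app', 'prompts')
    end

    # Renders app/prompts/<name>.txt.erb with the given locals, e.g.
    # RubyLLM::Prompt.render('friend', name: 'Andrey')
    def self.render(name, **locals)
      path = root.join("#{name}.txt.erb")
      raise PromptNotFoundError, path.to_s unless File.exist?(path)

      ERB.new(File.read(path)).result_with_hash(locals)
    end
  end
end

Agent's own rendering would then reduce to resolving its class-name scope and runtime context before calling Prompt.render.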

No new capabilities — same convention, just accessible without subclassing Agent.

Why this belongs in RubyLLM

It's already in RubyLLM — just locked behind Agent's private methods. This is a refactoring, not a new feature.
