Your AI is brilliant.
So why are you editing its homework?

ConversationalFilter gives you the full power of any LLM with focused, scope-aware output. Stop trimming. Start building.

Open source under MIT · Commercial license for production use

The Problem

Remember when Python HTTP was painful? Then requests made it clean. LLM responses have the same problem. We fix it the same way.

Without ConversationalFilter

You: How do I connect to a database?

LLM: Great question! Databases are fundamental
to modern software. Let me explain 8 different
database types, 12 ORMs, connection pooling
theory, sharding strategies, the history of
SQL, NoSQL vs SQL debates, and here are 47
links you didn't ask for...

(800+ words for a 7-word question)

With ConversationalFilter

You: How do I connect to a database?

LLM: For Python with SQLite:

  import sqlite3
  conn = sqlite3.connect('database.db')

Want me to dive deeper?

(Scope creep detected, trimmed to core answer)

How It Works

1. Analyze Scope

ScopeAnalyzer calculates the complexity ratio between your question and the AI's response using word count, sentence structure, and technical density.

2. Detect Creep

If the response is disproportionately complex compared to the question, the system flags it as scope creep based on your configured threshold.

3. Filter & Ask

The response is trimmed to its core answer, and a clarifying question is added so the user can request more detail if they want it.
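The three steps above can be sketched in a few lines of plain Python. This is an illustration only, not the library's implementation: word count stands in for the real ScopeAnalyzer metrics (sentence structure, technical density are omitted), and the 4.0 threshold is an assumed default.

```python
import re

# Assumed default; in the real library the threshold is configurable.
CREEP_THRESHOLD = 4.0

def complexity(text: str) -> int:
    """Approximate complexity as word count (a stand-in for the real metric)."""
    return max(len(text.split()), 1)

def filter_response(question: str, response: str) -> str:
    """Flag scope creep when the response is disproportionately complex,
    then trim to the first sentence and append a clarifying question."""
    ratio = complexity(response) / complexity(question)
    if ratio <= CREEP_THRESHOLD:
        return response  # proportionate answer: pass through unchanged
    # Trim at a natural sentence boundary (first sentence, for brevity)
    first_sentence = re.split(r"(?<=[.!?])\s+", response.strip())[0]
    return f"{first_sentence}\n\nWant me to dive deeper?"

long_answer = "Great question! " + "Databases are fundamental to software. " * 40
print(filter_response("How do I connect to a database?", long_answer))
```

The real library also preserves multi-sentence core answers; trimming to one sentence here just keeps the sketch short.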

Features

🔍

Scope Creep Detection

Measures elaboration ratio between question complexity and response complexity. Configurable thresholds.

✂️

Smart Truncation

Cuts responses at natural sentence boundaries. No mid-sentence breaks. Preserves the core answer.

👤

User Profiles

Predefined profiles like CONCISE_LEARNER or TUTORIAL_LEARNER. Create custom profiles matching your style.
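A custom profile might look something like the sketch below. The `UserProfile` fields shown (`max_ratio`, `clarifying_question`) are hypothetical, modeled on the behavior described on this page rather than the library's documented API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserProfile:
    """Hypothetical profile shape; field names are assumptions."""
    name: str
    max_ratio: float          # elaboration ratio before creep is flagged
    clarifying_question: str  # appended when a response is trimmed

# Roughly how the predefined profiles might differ:
CONCISE_LEARNER = UserProfile(
    name="concise_learner",
    max_ratio=3.0,
    clarifying_question="Want me to dive deeper?",
)

# A custom profile that tolerates longer, tutorial-style answers
TUTORIAL_LEARNER = UserProfile(
    name="tutorial_learner",
    max_ratio=8.0,
    clarifying_question="Should I keep going step by step?",
)
```

The key design idea is that a profile bundles a tolerance for elaboration with a follow-up prompt, so one filter instance can serve very different audiences.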

🌐

REST API

Production-ready API hosted on Railway. Send a question and response, get back the filtered version.
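A sketch of what a call to the hosted API might look like. The endpoint path and JSON field names below are assumptions for illustration; consult the official API documentation for the real contract.

```python
import json

# Hypothetical request payload: "question", "response", and "profile"
# are assumed field names, not a documented schema.
payload = {
    "question": "How do I authenticate users?",
    "response": "Great question! Authentication is a deep topic...",
    "profile": "concise_learner",
}
print(json.dumps(payload, indent=2))

# With the `requests` library installed, you would POST this to the
# filter endpoint (host and path are placeholders):
# filtered = requests.post("https://<your-api-host>/filter", json=payload).json()
```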

📦

Python Package

Install with pip and integrate directly into your application. Works with any LLM provider.

🔑

License Management

Built-in license validation via Lemonsqueezy. Automatic key generation on purchase.

Simple Integration

As clean as import requests: a few lines to start filtering

from conversational_filter import ConversationalFilter
from conversational_filter.user_profile import CONCISE_LEARNER

# Create a filter with your preferred style
cf = ConversationalFilter(user_profile=CONCISE_LEARNER)

# Filter any LLM response
result = cf.filter_response(
    question="How do I authenticate users?",
    response=verbose_llm_output
)

print(result.filtered_response)
# Concise, focused answer - no scope creep

print(result.clarifying_question)
# "Want me to dive deeper?"

Built For

AI Chatbot Developers

Keep your chatbot responses focused and on-topic. Prevent users from getting overwhelmed with information they didn't request.

Developer Tools

Integrate into IDE plugins, CLI tools, or coding assistants to keep AI code suggestions concise and relevant.

Education Platforms

Match response depth to the learner's level. Beginners get simple answers; advanced users get technical detail on demand.

Enterprise Applications

Enforce response quality standards across your organization's AI-powered tools with consistent filtering policies.

Pricing

Open source for personal use. Commercial license for production.

Billed monthly or yearly (2 months free on yearly)

Team

For growing teams

$499
per month
  • Up to 5 developers
  • Commercial use
  • All API features
  • Priority support
  • Team management
Get Started

Open Source

ConversationalFilter is dual-licensed. The core library is open source under MIT for personal and non-commercial use. Commercial use requires a license. Install and try it now:

pip install conversational-filter