Loving the direction things are going with system-wide prompts and flows — super powerful already. I wanted to share an idea that I think could unlock even more value and flexibility, especially as we scale document ingestion and knowledge core creation.
The Idea
What if we treated flows as the place where all the logic lives? Each flow would define:
• Its own system prompt (for how the LLM should think)
• Tailored extraction prompts
• Tailored chunking techniques (e.g., handling documents with tables vs. long reads vs. point cloud images, and so on)
This would make document ingestion/processing feel much more modular, controlled, and scalable — especially when dealing with very different types of documents (e.g., jurisdiction rulings vs. voting debriefs).
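To make the shape of this more concrete, here is a minimal sketch of what a per-flow definition could look like. The `Flow` class, its field names, and the chunking helper are purely hypothetical illustrations of the idea, not an existing API:

```python
# Hypothetical sketch: each flow bundles its own system prompt,
# extraction prompts, and chunking strategy. Names are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Flow:
    name: str
    system_prompt: str                    # how the LLM should think for this flow
    extraction_prompts: list[str]         # tailored extraction prompts
    chunker: Callable[[str], list[str]]   # tailored chunking technique

def chunk_by_paragraph(text: str) -> list[str]:
    # Naive chunker suited to long-read documents: split on blank lines.
    return [p.strip() for p in text.split("\n\n") if p.strip()]

# Example: a flow tuned for jurisdiction rulings.
ruling_flow = Flow(
    name="jurisdiction-rulings",
    system_prompt="You are a legal analyst summarising court rulings.",
    extraction_prompts=[
        "Extract the parties involved.",
        "Extract the ruling and its key reasoning.",
    ],
    chunker=chunk_by_paragraph,
)
```

A voting-debrief flow could then swap in a different system prompt, its own extraction prompts, and a table-aware chunker, without touching any other flow.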
Would this make sense to you?