What's new#
Large, dense CSV and Excel tables that previously failed due to token limits now process correctly. The system automatically splits oversized tables into manageable chunks during processing, then merges the structured extraction results back together seamlessly.
Why it matters#
- Dense spreadsheets with hundreds of rows/columns no longer fail
- Financial reports with extensive data tables process reliably
- Large data exports from source systems maintain full extraction accuracy
- No manual preprocessing required - handled automatically
Highlights#
- Automatic table splitting when token limits are approached
- Intelligent result merging preserves table relationships
- Maintains extraction accuracy across table chunks
- Transparent to the user - no configuration changes needed
How it works#
- Detects when a table would exceed token limits
- Splits the table intelligently, preserving headers and surrounding context
- Processes each chunk with full context awareness
- Merges the extraction results back into the complete table structure
- Returns unified results as if the entire table had been processed at once (see the sketch below)
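The exact implementation isn't published, but the flow can be illustrated with a short sketch. Everything below is illustrative: the `estimate_tokens`, `split_table`, and `extract_chunk` helpers and the token budget are placeholders, not the product's actual internals.

```python
import csv
import io

# Rough token budget per chunk (illustrative value, not the real limit).
MAX_TOKENS_PER_CHUNK = 4000

def estimate_tokens(rows):
    """Crude token estimate: roughly 1 token per 4 characters of cell text."""
    return sum(len(cell) for row in rows for cell in row) // 4

def split_table(header, rows, max_tokens=MAX_TOKENS_PER_CHUNK):
    """Split rows into chunks that each stay under the token budget.

    The header is repeated at the top of every chunk so each piece keeps
    enough context to be extracted on its own.
    """
    chunks, current = [], []
    for row in rows:
        current.append(row)
        if estimate_tokens([header] + current) > max_tokens:
            # Move the row that tipped us over the budget into the next chunk.
            overflow = current.pop()
            if current:
                chunks.append([header] + current)
            current = [overflow]
    if current:
        chunks.append([header] + current)
    return chunks

def extract_chunk(chunk_rows):
    """Stand-in for the real extraction call on one chunk.

    Here it simply converts rows to dicts keyed by the header.
    """
    header, *rows = chunk_rows
    return [dict(zip(header, row)) for row in rows]

def process_large_table(csv_text):
    """Split, extract each chunk, and merge results back into one list."""
    reader = csv.reader(io.StringIO(csv_text))
    header, *rows = list(reader)
    merged = []
    for chunk in split_table(header, rows):
        merged.extend(extract_chunk(chunk))
    return merged
```

Run against a 500-row CSV string, this returns a single merged list of row records, mirroring the chunk-then-merge behaviour described above.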
How to use#
This works automatically - no changes needed to your existing code.
Scenarios that previously failed and now work#
- 500+ row expense reports
- Multi-sheet Excel files with dense data tables
- CSV exports from database systems
- Financial statements with extensive line items
Status#
✅ Live now. Large table processing works automatically with existing extraction configurations.