Andrejus Baranovski
Fast Large Table Extraction: Sparrow + dots.ocr to JSON
Sparrow provides a table processing mode. It is optimized for large tables and comes with a separate template script (new templates can easily be added) that converts dots.ocr markdown output into structured JSON with field mapping.
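The markdown-to-JSON step can be sketched roughly as below. This is a minimal illustration of the idea, not Sparrow's actual template script; the `field_map` argument and column names are hypothetical.

```python
import json

def markdown_table_to_json(md: str, field_map: dict) -> list:
    """Parse a pipe-delimited markdown table (as emitted by an OCR model)
    into a list of JSON records, renaming columns via field_map.
    Hypothetical sketch, not Sparrow's real template API."""
    lines = [l.strip() for l in md.strip().splitlines() if l.strip()]
    header = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # lines[1] is the |---|---| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append({field_map.get(h, h): v for h, v in zip(header, cells)})
    return rows

# Example markdown table like one an OCR model might produce
md = """
| Instrument | Valuation |
|---|---|
| Fund A | 100 |
| Fund B | 250 |
"""
records = markdown_table_to_json(
    md, {"Instrument": "instrument_name", "Valuation": "valuation"}
)
print(json.dumps(records, indent=2))
```

Keeping the parsing in plain code (rather than asking an LLM to restructure the table) is what makes this approach fast and reliable on large tables.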
Local OCR Comparison: dots.ocr More Accurate, DeepSeek-OCR 2 Faster (Sparrow + MLX)
I ran local tests with Sparrow to compare DeepSeek OCR2 and dots.ocr (by RedNote), both running on MLX-VLM in FP16 precision. dots.ocr consistently beats DeepSeek OCR2 in accuracy, but DeepSeek OCR2 delivers much better inference performance.
GLM-OCR vs DeepSeek OCR 2: Which One Wins at Markdown Extraction?
I compare two OCR models using real test cases: GLM OCR and DeepSeek OCR2. Both are evaluated on their ability to extract document content and convert it into well-structured Markdown. I demonstrate which model performs better and which one is faster.
Get Vision LLMs to Follow Your Rules: Prompt-Guided JSON Formatting
A JSON query helps fetch structured output from a Vision LLM and extract document data. I describe how to improve such output with additional rules provided through the LLM prompt. In this video I share an example of number formatting: based on the applied rule, the LLM outputs values in the requested format.
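The idea can be sketched as follows. The rule text and the post-processing helper are hypothetical illustrations (in the video the rule is applied through the prompt itself, so the model returns values already formatted):

```python
def apply_number_rule(value: str, decimals: int = 2) -> str:
    """Normalize a numeric string according to a formatting rule:
    no currency symbols or thousands separators, fixed decimal places.
    Hypothetical fallback post-processing for when the LLM ignores the rule."""
    cleaned = value.replace(",", "").replace("$", "").strip()
    return f"{float(cleaned):.{decimals}f}"

# A formatting rule appended to the extraction prompt (example wording)
rule = ("Return all monetary values as plain numbers with two decimal "
        "places, no currency symbols or thousands separators.")
prompt = f"Extract the requested fields as JSON.\nFormatting rule: {rule}"

print(apply_number_rule("$1,250.5"))  # -> 1250.50
```

Stating the rule explicitly in the prompt usually gets the model most of the way; a deterministic normalizer like the one above can then guarantee the format.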
Vision LLM Output Control for Better OCR with Prompt Hints
I explain my approach to enforcing better OCR output from vision LLMs with prompt hints. This makes it possible to set rules for output data validation and formatting.
DeepSeek OCR Markdown Processing in Sparrow for Large Tables
I describe new functionality in Sparrow, where DeepSeek OCR extracts text data in markdown format and, in the next step, instruction LLM inference converts the data into structured JSON. This approach improves large table processing and avoids vision LLM hallucinations.
DeepSeek OCR Review
I'm testing structured data extraction with DeepSeek OCR. It works well, with data accuracy and performance good enough to disrupt traditional cloud-based document processing solutions.
New Ministral 3 14B vs Mistral Small 3.2 24B Review
I review data retrieval accuracy and inference speed for the new Ministral 3 14B model vs the older Mistral Small 3.2 24B. The older and larger 24B model wins this time.
Structured Data Retrieval with Sparrow using OCR and Vision LLM [Improved Accuracy]
I explain the improvements I'm adding to Sparrow to achieve better accuracy for structured data. I use a method where I run an OCR step first, then construct an advanced prompt with the OCR data injected. This prompt is sent along with the image to a Vision LLM for structured data retrieval. All of this happens as part of a single pipeline.
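The prompt-construction step of such a pipeline can be sketched like this. The wording and the query format are illustrative assumptions, not Sparrow's exact prompt:

```python
def build_prompt(query: str, ocr_text: str) -> str:
    """Construct a prompt that injects OCR output as grounding context
    before the structured-data query. The prompt is then sent to the
    Vision LLM together with the document image (not shown here)."""
    return (
        "You are extracting structured data from the attached document image.\n"
        "OCR text extracted from the same image (use it to verify values):\n"
        f"{ocr_text}\n\n"
        f"Query: {query}\n"
        "Answer with JSON only."
    )

prompt = build_prompt(
    '{"invoice_no": "str", "total": "float"}',
    "Invoice No: 1234\nTotal: 99.90",
)
```

The Vision LLM still sees the image, but the injected OCR text anchors digits and names that vision models often misread, which is where the accuracy gain comes from.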
Ollama and MLX-VLM Accuracy Review (Qwen3-VL and Mistral Small 3.2)
I ran detailed tests to compare accuracy for the same models (Qwen3-VL and Mistral Small 3.2) running on Ollama and MLX-VLM (recent 0.3.7 version). MLX-VLM runs faster, but with lower accuracy. The same holds across different models.
Comparing Qwen3-VL AI Models for OCR Task
I'm comparing the Qwen3-VL 8B BF16 and Qwen3-VL 30B Q8 models for OCR and structured data extraction tasks. Based on my findings, the quantized 30B model runs faster and with better accuracy than the 8B BF16 model, despite using more memory.
Qwen3-VL Accuracy Differences on Ollama vs MLX
I ran a couple of tests with structured data extraction using the newest Qwen3-VL model on a Mac Mini M4 Pro with 64GB. I discovered that the same Qwen3-VL model with the same level of quantization performs differently on Ollama vs. MLX. It seems the model conversion step is crucial, and we must evaluate model performance on different platforms before going to production.
Qwen3-VL New Models Comparison and Performance on Mac Mini M4
I run and compare the newest Qwen3-VL models in Sparrow. Qwen3-VL models run fast and provide good accuracy.
Ollama Support in Sparrow and Update to Latest MLX
I explain what's new in Sparrow and what was updated in the recent version.
Ollama vs MLX Inference Speed on Mac Mini M4 Pro 64GB
MLX runs faster on the first inference, but thanks to model caching or other optimizations by Ollama, the second and subsequent inferences run faster on Ollama.
Advanced Structured Data Processing in Sparrow
I added instruction and validation functionality to Sparrow. This makes it possible to process business logic on document data directly through a Sparrow query. For example, it can check whether given fields are present in the document.
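A field-presence check like the one described can be sketched as below. This is a simplified illustration of the validation idea, not Sparrow's actual implementation; the field names are made up.

```python
def validate_fields(extracted: dict, required: list) -> dict:
    """Check that required fields are present and non-empty in the
    data extracted from a document. Returns a validation report.
    Simplified sketch of the validation logic."""
    missing = [f for f in required if not extracted.get(f)]
    return {"valid": not missing, "missing_fields": missing}

report = validate_fields(
    {"invoice_no": "1234", "total": ""},
    ["invoice_no", "total", "date"],
)
print(report)  # total is empty and date is absent, so validation fails
```

Running such checks inside the pipeline means a document can be flagged for review immediately, instead of bad data silently flowing downstream.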
My Experience with PyCharm AI Assistant
Explaining my experience with the PyCharm AI Assistant and showing an example of how code changes can be reviewed one by one before they are accepted into your codebase.
Financial Table Structure Analysis with Computer Vision
Explaining new functionality I'm implementing in Sparrow to pre-process tables with a grid structure. This greatly improves table data extraction by Vision LLMs.
PaddleOCR 3.1 Setup in FastAPI
I explain how to run PaddleOCR 3.1 from a FastAPI app.
Structured Data Query with Sparrow AI Agent
Sparrow comes with an option to extract structured data with a query. In this video I explain how you can define such a query to fetch array and field data.
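A query combining scalar fields with an array of row objects might look like the sketch below. The exact syntax is illustrative; consult Sparrow's documentation for the real query format, and all field names here are made up.

```python
import json

# Hypothetical query: scalar fields plus an array describing repeated
# table rows. A list value signals that multiple records are expected.
query = {
    "company_name": "str",
    "report_date": "str",
    "holdings": [{"instrument": "str", "valuation": "float"}],
}

# The query is serialized and passed to the extraction agent as a
# template for the JSON it should return.
query_str = json.dumps(query)
print(query_str)
```

The type hints ("str", "float") tell the agent what shape each value should take, and the nested list lets one query pull both document-level fields and every row of a table in a single call.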


