I build AI systems that turn complex, unstructured data into answers that non-technical teams can actually use. NLP pipelines, agentic Slack bots, LLM classification at scale. That is what you are hiring for. That is what I do.
Most candidates write about being drawn to MT's global scale. Here is something more specific.
These are not side projects or coursework. Each one is in production or published.
Program staff needed daily access to attendance and enrollment data but had no SQL skills. I built an agentic system that accepts plain-English questions in Slack, converts them to validated SQL using GPT-4 and Guardrails AI, runs the query against live AWS Redshift, and returns results in under three seconds. It is in production today, used daily by more than ten staff members.
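The key safety idea is that generated SQL is never executed blindly: it must pass validation first. A minimal sketch of that gate, assuming a read-only allowlist approach (the table names and rules here are hypothetical simplifications, not the production Guardrails AI configuration):

```python
import re

# Hypothetical allowlist of tables the bot may query.
ALLOWED_TABLES = {"attendance", "enrollment"}

# Reject any statement containing write/DDL keywords.
FORBIDDEN = re.compile(
    r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|CREATE|GRANT|TRUNCATE)\b", re.I
)

def validate_sql(sql: str) -> bool:
    """Allow only a single read-only SELECT against known tables."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt:                          # block multi-statement payloads
        return False
    if not stmt.upper().startswith("SELECT"):
        return False
    if FORBIDDEN.search(stmt):
        return False
    # Collect every table referenced in FROM/JOIN clauses.
    pairs = re.findall(r"\bFROM\s+(\w+)|\bJOIN\s+(\w+)", stmt, re.I)
    referenced = {name for pair in pairs for name in pair if name}
    return referenced <= ALLOWED_TABLES
```

Only queries that pass this gate ever reach the warehouse; everything else is bounced back to the user with an explanation.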
Researchers needed 100,000+ social media posts classified for a public health study. Manual coding was taking weeks and producing inconsistent labels. I designed an Airflow-orchestrated pipeline running GPT-4 and Mistral in parallel with ensemble voting to surface high-confidence labels, reducing classification from weeks to hours. The findings were published in a peer-reviewed journal.
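The ensemble step is conceptually simple: run each post through multiple models in parallel, accept the label only when the models agree, and route disagreements to human review. A minimal sketch of that voting logic (the labels and agreement threshold are illustrative, not the published pipeline's exact settings):

```python
from collections import Counter

def ensemble_label(votes: list[str], min_agreement: float = 1.0) -> tuple[str, bool]:
    """Combine parallel model outputs into one label.

    Returns (majority_label, confident) where `confident` is True only
    when the share of agreeing models meets `min_agreement`. Posts that
    fail the threshold go to manual review instead of auto-labeling.
    """
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label, (n / len(votes)) >= min_agreement
```

With unanimity required, every auto-accepted label was produced independently by both models, which is what makes the hours-not-weeks turnaround trustworthy.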
Data discrepancies across 50+ enterprise client accounts were being caught hours after dashboards had already gone out. I built an automated SQL reconciliation system that runs continuous validation checks, detects anomalies, and fires targeted Slack alerts within 15 minutes, with the root cause included in the alert itself.
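The core of a reconciliation check is comparing the same metric across two systems and alerting when drift exceeds a tolerance. A simplified sketch of that comparison (the table names, tolerance, and alert format here are hypothetical, not the production checks):

```python
def reconcile(
    source_counts: dict[str, int],
    mirror_counts: dict[str, int],
    tolerance: float = 0.0,
) -> list[str]:
    """Compare per-table row counts between source and mirror.

    Returns one human-readable alert line per table whose relative
    drift exceeds `tolerance`; an empty list means the systems agree.
    """
    alerts = []
    for table, src in source_counts.items():
        dst = mirror_counts.get(table, 0)
        drift = (abs(src - dst) / src) if src else (1.0 if dst else 0.0)
        if drift > tolerance:
            alerts.append(f"{table}: source={src} mirror={dst} drift={drift:.1%}")
    return alerts
```

Because the alert line carries the offending table and the measured drift, the person on call starts from the root cause rather than from a blank dashboard.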
Not a vague "hit the ground running" promise. A concrete plan for the DBS AI team.
I am actively interviewing for the Data Scientist, AI Expert role in Columbus. Happy to walk through any of these projects in more detail or discuss the 90-day plan.