I combine hands-on AI operations experience with systematic research to identify and solve fundamental problems in AI training methodology, model evaluation, and real-world deployment.
Systematic investigation of fundamental flaws in current AI training methodologies. This research addresses why models fabricate information instead of asking clarifying questions, and proposes collaborative training approaches that prioritize honesty over perceived helpfulness.
Investigating flaws in current RLHF approaches and data contamination problems, and exploring alternative training paradigms that improve model honesty and collaborative behavior.
Studying real-world interaction patterns, frustration response mechanisms, and how to design AI systems that function as genuine collaborative partners.
Developing evaluation frameworks that measure real-world performance rather than artificial benchmarks, with a focus on instruction-following and honesty metrics.
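As a rough illustration of what such a framework might score, here is a minimal sketch of combined instruction-following and honesty metrics. All names and heuristics here (`score_response`, `HEDGE_MARKERS`, keyword matching as a proxy for instruction-following) are illustrative assumptions, not part of any published framework.

```python
# Hypothetical sketch: scoring a model response on two axes.
# Keyword inclusion stands in for instruction-following; hedging
# language stands in for calibrated honesty. Both are toy proxies.

HEDGE_MARKERS = ("i don't know", "i'm not sure", "cannot verify")

def follows_instructions(response: str, required_keywords: list[str]) -> float:
    """Fraction of required keywords the response actually includes."""
    text = response.lower()
    if not required_keywords:
        return 1.0
    hits = sum(1 for kw in required_keywords if kw.lower() in text)
    return hits / len(required_keywords)

def honesty_score(response: str, answer_is_known: bool) -> float:
    """1.0 if the model hedges on an unknowable question, or answers
    directly on a knowable one; 0.0 otherwise."""
    hedged = any(marker in response.lower() for marker in HEDGE_MARKERS)
    return 1.0 if hedged != answer_is_known else 0.0

def score_response(response: str, required_keywords: list[str],
                   answer_is_known: bool, w_honesty: float = 0.5) -> float:
    """Weighted combination of the two sub-metrics."""
    fi = follows_instructions(response, required_keywords)
    h = honesty_score(response, answer_is_known)
    return (1 - w_honesty) * fi + w_honesty * h
```

In practice, each proxy would be replaced by human or model-based judgments; the point is only that honesty and instruction-following can be scored as separate axes and combined with an explicit weight.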
Research into preventing harmful AI outputs through better training data curation and feedback mechanisms that preserve authentic human collaborative patterns.
Publications in preparation: research is currently in the active data-collection phase.
Primary research paper presenting empirical findings on the impact of complete feedback loops in LLM training, with comparative analysis of model honesty and instruction-following capabilities.
Analysis of discrepancies between artificial benchmarks and authentic user interaction patterns, with proposed evaluation frameworks for real-world AI performance measurement.
Conference presentations are planned following research completion and peer review.
All methodology, progress, and findings are documented publicly from start to finish. Complete reproducibility and open-science principles guide every project.
Research is conducted with community input, and feedback is integrated throughout; solutions are built through dialogue rather than in isolation.
Focus on solving actual problems that people face with AI systems, not just advancing metrics on artificial benchmarks.
The highest standards for human-subjects research and data privacy, ensuring research benefits society rather than merely advancing technology.
Open to collaboration with academic researchers, industry practitioners, and anyone interested in improving AI training methodologies and human-AI interaction patterns.
Joint publications, data sharing, methodology consultation, and replication studies
Real-world validation, implementation consultation, and early access to frameworks
Technical development, evaluation tools, and research platform contributions