👋 About Me
I'm a researcher exploring the intersection of AI safety and capability. My work focuses on understanding and mitigating LLM hallucinations, improving model alignment, and advancing diffusion model techniques.
This blog documents my research, projects, and daily observations from the frontiers of AI development. Join me on this journey through the complexities of artificial intelligence.
🚀 Current Projects
Hallucination Detection Tool
In Progress: Building a system to detect and flag potential hallucinations in LLM outputs.
Alignment Metrics Dashboard
In Progress: Visualizing alignment metrics across different model architectures.
Reinforcement Learning Trading Agents
Execution: Developing RL agents that can adapt to dynamic market conditions for optimized trading strategies.