AI Research & Reflections

✦ Stumbling through life, reality, misaligned models, and beautifully unstable training ✦


👋 About Me


I'm a researcher exploring the intersection of AI safety and capability. My work focuses on detecting and mitigating LLM hallucinations, improving model alignment, and advancing diffusion model techniques.


This blog documents my research, projects, and daily observations from the frontiers of AI development. Join me on this journey through the complexities of artificial intelligence.

🚀 Current Projects

Hallucination Detection Tool

In Progress

Building a system to detect and flag potential hallucinations in LLM outputs.

Alignment Metrics Dashboard

In Progress

Visualizing alignment metrics across different model architectures.

Reinforcement Learning Trading Agents

In Progress

Developing reinforcement learning agents that adapt to dynamic market conditions to optimize trading strategies.