aboutMe

I am an AI Safety researcher focused on ensuring stable, explainable, desirable, and aligned behavior of Deep Learning (DL) models.
My primary motivation stems from the risks of a future where AI's discoveries outpace human comprehension, rendering human intelligence obsolete. Another motivation arises from my firsthand experience as a target of AI systems fine-tuned for totalitarian control, where algorithms suppress dissent and enforce conformity, which highlights the urgency of robust safeguards against misuse.
Aside from research, I'm into hiking, martial arts, history, and architecture.