DARLEI: Deep Accelerated Reinforcement Learning with Evolutionary Intelligence


We present DARLEI, a GPU-accelerated framework for studying the interplay between parallel reinforcement learning and morphological evolution in producing emergent ecological complexity. DARLEI harnesses Isaac Gym for scalable multi-agent simulation in rich environments, enabling new research into the dynamics between individual lifetime learning and long-term evolutionary goals. Compared to prior work requiring large distributed CPU clusters, DARLEI achieves over 20x speedup using just a single workstation. We systematically characterize DARLEI's performance under various conditions, revealing factors that affect the diversity of evolved morphologies. While current implementations demonstrate limited diversity over generations, we hope future work can build on DARLEI to study mechanisms for open-ended discovery. By bringing scalable accelerated simulation to this domain, DARLEI introduces a new platform to rapidly prototype and evaluate approaches at the intersection of embodied intelligence, reinforcement learning, and evolutionary computation.
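The abstract describes an outer evolutionary loop over agent morphologies wrapped around inner-loop lifetime learning. As a hedged illustration only (not DARLEI's actual implementation), the sketch below shows the general shape of such a nested loop: the morphology genome, the `train_lifetime` fitness stand-in, and truncation selection are all simplified placeholders; the real system trains each agent with reinforcement learning in Isaac Gym rather than scoring a toy gene.

```python
import random

def train_lifetime(morphology):
    """Stand-in for per-agent lifetime learning in the simulator.

    Hypothetical placeholder: fitness is a noisy function of a single
    'limb_length' gene, peaking at 0.7. In a real system this would run
    an RL training loop and return the learned policy's return.
    """
    return -abs(morphology["limb_length"] - 0.7) + random.gauss(0, 0.01)

def mutate(morphology, sigma=0.05):
    """Gaussian perturbation of the morphology genome."""
    child = dict(morphology)
    child["limb_length"] += random.gauss(0, sigma)
    return child

def evolve(pop_size=16, generations=10):
    """Outer evolutionary loop: evaluate, select survivors, refill via mutation."""
    population = [{"limb_length": random.uniform(0.1, 1.0)}
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=train_lifetime, reverse=True)
        survivors = scored[: pop_size // 2]  # truncation selection
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]
    return max(population, key=train_lifetime)

best = evolve()
```

The GPU acceleration the paper emphasizes would enter inside `train_lifetime`, where thousands of such agents can be simulated in parallel rather than one at a time on CPU.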