Deep reinforcement learning (RL) has proven remarkably successful at solving some difficult problems. However, so far most applications have focused on problems based on games or robotics (e.g. pictured above: the Atari moonlander, a standard benchmark in OpenAI Gym, and the ROBEL D'Claw, a Gym-compliant, trainable robotic claw developed by UC Berkeley researchers). Public benchmarks and open-source implementations of RL problems help research teams compare results and spur innovation. Many conservation decision-making problems involve elements which make them particularly challenging, such as complex nonlinear dynamics, tipping points, and stochasticity. This project seeks to engage researchers in both Artificial Intelligence (AI) and conservation biology to develop realistic conservation environments and to train, test, and compare the performance of RL agents on those environments.
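To make the idea concrete, here is a minimal sketch of what a Gym-compliant conservation environment might look like: a single fish stock with logistic growth and noisy recruitment, where the agent chooses an annual harvest fraction. The class name, dynamics, and parameter values are illustrative assumptions, not one of the project's actual environments.

```python
import numpy as np
import gym
from gym import spaces


class SimpleFisheryEnv(gym.Env):
    """Illustrative Gym-style environment (an assumption, not a project
    environment): a fish stock grows logistically with noisy recruitment,
    and the agent sets an annual harvest fraction."""

    def __init__(self, r=0.3, K=1.0, sigma=0.05, n_years=100):
        self.r, self.K, self.sigma, self.n_years = r, K, sigma, n_years
        # Action: fraction of the current stock harvested this year.
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)
        # Observation: current stock biomass.
        self.observation_space = spaces.Box(low=0.0, high=2.0 * K, shape=(1,), dtype=np.float32)

    def reset(self):
        self.stock = 0.75 * self.K
        self.year = 0
        return np.array([self.stock], dtype=np.float32)

    def step(self, action):
        frac = float(np.clip(np.asarray(action).ravel()[0], 0.0, 1.0))
        harvest = frac * self.stock
        self.stock -= harvest
        growth = self.r * self.stock * (1.0 - self.stock / self.K)
        self.stock = max(self.stock + growth + np.random.normal(0.0, self.sigma), 0.0)
        self.year += 1
        done = self.year >= self.n_years or self.stock <= 0.0
        # Reward is simply the annual yield in this sketch.
        return np.array([self.stock], dtype=np.float32), harvest, done, {}
```

In this sketch the reward is just the annual yield; realistic environments could instead penalize stock collapse or reward long-term ecosystem state.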
How can we best respond to the growing magnitude and frequency of forest fires around the globe?
(alpha version) Decades of overfishing have left many fish stocks depleted worldwide. Today, effects of global warming may frustrate rebuilding of even well-managed stocks.
(beta version) Should conservation efforts prioritize hotspots of species diversity, or land that is most threatened? Focus on the species most at risk, or those most likely to benefit? There are no easy answers.
(alpha version) How do we balance mitigation and adaptation strategies in response to rising sea levels?
(alpha version) Fueled by warming temperatures, bark beetles are destroying tens of thousands of square miles of forests across North America, Europe, and Siberia.
How best do we respond to emerging pandemics such as COVID-19?
(alpha version) This section will provide links to code and documentation for agents trained on particular challenges. These examples offer a convenient starting place for other researchers seeking to improve the performance of RL algorithms on each of the challenges included here.
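As an illustration only, training a baseline agent on such an environment can take just a few lines with an off-the-shelf library such as stable-baselines3; the environment, algorithm choice, and filenames below are assumptions, not the project's published baselines.

```python
# Illustrative baseline training run (assumes the SimpleFisheryEnv sketch
# above and stable-baselines3; depending on library versions, the
# environment may need the newer Gymnasium API rather than classic Gym).
from stable_baselines3 import PPO

env = SimpleFisheryEnv()
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=50_000)
model.save("ppo_fishery_baseline")  # hypothetical filename
```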
This section will track the performance of all known AI agents on each of the challenge environments. Submissions from all contestants are welcome. We only ask that you share an open-source implementation of your agent that can be run against the benchmark environments.
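A sketch of what "running against the benchmark environments" might look like is below; the agent's `act` interface and the mean-episode-reward score are assumptions, not a fixed submission format.

```python
import numpy as np


def evaluate(agent, env, n_episodes=100):
    """Score an agent by its mean episode reward over repeated rollouts
    (hypothetical scoring; assumes the classic Gym reset/step API)."""
    scores = []
    for _ in range(n_episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = agent.act(obs)              # hypothetical agent interface
            obs, reward, done, _ = env.step(action)
            total += reward
        scores.append(total)
    return float(np.mean(scores))
```

For example, `evaluate(my_agent, SimpleFisheryEnv())` would return the kind of score a leaderboard entry might report.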
We welcome contributions of all forms: submit an agent, extend an existing environment or add a new one, improve the documentation, or just share your ideas. This section will include links to relevant GitHub repositories, and possibly a Discourse forum or similar medium.
Marcus is a PhD student in the Boettiger Lab at UC Berkeley and the project leader.
Kevin is a graduate of Paris Ensta-Tech and was an early contributor during a summer internship.
Carl is an assistant professor at UC Berkeley in the Department of Environmental Science, Policy, and Management.
Millie is a PhD student in the Brashares and Boettiger labs.
Get in touch via GitHub Issues, send us a pull request, email, or tweet!