Can Deep Reinforcement Learning Improve Conservation Outcomes?

Open Benchmarks for Conservation Agents

Deep reinforcement learning (RL) has proven remarkably successful at solving some difficult problems. However, most applications so far have focused on games or robotics (e.g. the Lunar Lander environment, a standard benchmark in OpenAI Gym, and the ROBEL D'Claw, a Gym-compliant, trainable robotic claw developed by UC Berkeley researchers). Public benchmarks and open-source implementations of RL problems help research teams compare results and spur innovation. Many conservation decision-making problems involve elements which make them particularly challenging, such as complex nonlinear dynamics, tipping points, and stochasticity. This project seeks to engage researchers in both Artificial Intelligence (AI) and conservation biology to develop realistic conservation environments and to train, test, and compare the performance of RL agents on those environments.
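To make the idea of a "conservation environment" concrete, here is a minimal sketch of what a gym-style fisheries problem might look like: logistic stock growth with stochastic noise, a harvest action, and the harvest as reward. The class name, dynamics, and reward are illustrative assumptions, not the project's actual implementation, and the standard `reset`/`step` interface is written in plain Python to avoid a hard dependency on Gym.

```python
import random

class MinimalFishingEnv:
    """Illustrative gym-style environment: a fish stock with logistic
    growth, a harvest action, and the harvest as reward.
    A hypothetical sketch, not this project's actual implementation."""

    def __init__(self, r=0.3, K=1.0, sigma=0.02, horizon=100, seed=0):
        self.r, self.K, self.sigma, self.horizon = r, K, sigma, horizon
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.stock = 0.75 * self.K  # start below carrying capacity
        self.t = 0
        return self.stock

    def step(self, action):
        # action: fraction of the stock to harvest, clipped to [0, 1]
        effort = min(max(action, 0.0), 1.0)
        harvest = effort * self.stock
        self.stock -= harvest
        # logistic growth plus noise (the stochastic dynamics that make
        # these problems hard for classical optimization)
        growth = self.r * self.stock * (1 - self.stock / self.K)
        self.stock = max(self.stock + growth + self.rng.gauss(0, self.sigma), 0.0)
        self.t += 1
        done = self.t >= self.horizon
        return self.stock, harvest, done, {}

env = MinimalFishingEnv()
obs = env.reset()
total = 0.0
for _ in range(100):
    obs, reward, done, _ = env.step(0.1)  # fixed 10% harvest policy
    total += reward
```

An RL agent would replace the fixed 10% harvest rule with a learned policy mapping the observed stock to a harvest fraction.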

Environments


Forest Fires

How can we best respond to the growing magnitude and frequency of forest fires around the globe?

alpha version

Overfishing

Decades of overfishing have left many fish stocks depleted worldwide. Today, the effects of global warming may frustrate the rebuilding of even well-managed stocks.

beta version

Endangered Species

Should conservation efforts prioritize hotspots of species diversity, or the land that is most threatened? Should we focus on the species most at risk, or those most likely to benefit? There are no easy answers.

alpha version

Climate Change

How do we balance mitigation and adaptation strategies in response to rising sea levels?

alpha version

Bark Beetle Devastation

Fueled by warming temperatures, bark beetles are destroying tens of thousands of square miles of forests across North America, Europe, and Siberia.


Novel Zoonotic Pandemics

How best do we respond to emerging pandemics such as COVID-19?

alpha version

Agents

This section will provide links to code and documentation for trained agents on particular challenges. These examples will provide a convenient starting place for other researchers seeking to improve the performance of RL algorithms on each of the challenges included here.


Stable Baselines3

Examples using the PyTorch-based stable-baselines3 implementations of existing published algorithms on each of the gym environments.

Leaderboard

This section will track the performance of all known AI agents on each of the challenge environments. Submissions from all contestants are welcome; we ask only that you share an open-source implementation of your agent that can be run against the benchmark environments.
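One way a leaderboard score might be computed (an illustrative sketch, not the project's actual evaluation protocol): run a submitted agent for several seeded episodes on a benchmark environment and report the mean cumulative reward. The `predict(obs)` method mirrors the interface that stable-baselines3 models expose; the environment and agent below are toy stand-ins so the sketch is self-contained.

```python
import random

def evaluate(agent, env_factory, n_episodes=5):
    """Mean cumulative reward of `agent` over `n_episodes` seeded episodes.
    Assumes a gym-style env (reset/step) and an agent exposing
    predict(obs) -> action, as stable-baselines3 models do."""
    returns = []
    for seed in range(n_episodes):
        env = env_factory(seed)
        obs, total, done = env.reset(), 0.0, False
        while not done:
            obs, reward, done, _ = env.step(agent.predict(obs))
            total += reward
        returns.append(total)
    return sum(returns) / len(returns)

# Toy stand-ins to make the sketch self-contained
class RandomWalkEnv:
    def __init__(self, seed=0, horizon=20):
        self.rng, self.horizon = random.Random(seed), horizon
    def reset(self):
        self.t = 0
        return 0.0
    def step(self, action):
        self.t += 1
        reward = 1.0 if action > 0 else 0.0  # reward any positive action
        return self.rng.random(), reward, self.t >= self.horizon, {}

class ConstantAgent:
    def predict(self, obs):
        return 0.5

score = evaluate(ConstantAgent(), RandomWalkEnv)
```

Fixing the episode seeds keeps scores comparable across submissions, since these environments are stochastic.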

Fishing leaderboard

Contribute

We welcome contributions of all forms: submit an agent, extend an existing environment or add a new one, improve the documentation, or just share your ideas. This section will link to the relevant GitHub repositories and possibly a Discourse forum or similar medium.

Current Contributors

Marcus Lapeyrolerie

Marcus is a PhD student in the Boettiger Lab at UC Berkeley and the project leader.

Kevin Assobo

Kevin is a graduate of ENSTA ParisTech and was an early contributor during a summer internship.

Carl Boettiger

Carl is an assistant professor at UC Berkeley in the Department of Environmental Science, Policy, and Management.

Millie Chapman

Millie is a PhD student in the Brashares and Boettiger labs.

You?

Get in touch via GitHub Issues, send us a pull request, an email, or a tweet!