Parallel Bayesian Optimization of Multi-agent Systems

Overview

Parallel Bayesian Optimization of Agent-based Transportation Simulation

Kiran Chhatre¹*, Sidney Feygin², Colin Sheppard¹,², and Rashid Waraich¹,²

¹Energy Technologies Area, Berkeley Lab · ²Marain.ai (now BrightDrop)

[arXiv][Code Wiki][Slides]

Behavior, Energy, Autonomy, and Mobility model

MATSim (Multi-Agent Transport Simulation Toolkit) is an open-source, large-scale agent-based transportation planning project applied to areas such as road transport, public transport, freight transport, and regional evacuation. The BEAM (Behavior, Energy, Autonomy, and Mobility) framework extends MATSim to enable powerful and scalable analysis of urban transportation systems. Agents in a BEAM simulation exhibit 'mode choice' behavior based on a multinomial logit model. In our study, we consider eight mode choices: bike, car, walk, ride hail, driving to transit, walking to transit, ride hail to transit, and ride hail pooling. The 'alternative specific constants' for each mode choice are critical hyperparameters in the configuration file of the scenario under experimentation. We use the 'Urbansim-10k' BEAM scenario (with a population size of 10,000) for all our experiments. Since these hyperparameters affect the simulation in complex ways, manual calibration is time consuming. We present a parallel Bayesian optimization method with an early stopping rule that converges quickly to an optimal configuration for this multi-input, multi-output problem. Our model is based on the open-source HpBandSter package. The approach combines a single multidimensional Kernel Density Estimator (KDE), replacing the hierarchy of 1D KDEs used in earlier approaches, with Hyperband as a cheap evaluator, and it also incorporates an extrapolation-based early stopping rule. With our model, we achieve a 25% L1 norm for a large-scale BEAM simulation in a fully autonomous manner. To the best of our knowledge, our work is the first of its kind applied to large-scale multi-agent transportation simulations. This work can also be useful for surrogate modeling of scenarios with very large populations. You can find our paper here (accepted at LOD 2022).
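
The sketch below (not code from the paper or the repository) illustrates the two pieces this description rests on: multinomial logit mode-choice probabilities, in which the alternative specific constants (ASCs) act as the tunable hyperparameters, and an L1 distance between simulated and benchmark mode shares as the kind of calibration loss reported above. The function names and dictionaries are illustrative and not part of BEAM's API.

import numpy as np

# the eight mode alternatives considered in the study
MODES = ["bike", "car", "walk", "ride_hail", "drive_transit",
         "walk_transit", "ride_hail_transit", "ride_hail_pooled"]

def mode_choice_probabilities(utilities, ascs):
    """Multinomial logit: P(m) = exp(V_m + ASC_m) / sum_k exp(V_k + ASC_k)."""
    scores = np.array([utilities[m] + ascs.get(m, 0.0) for m in MODES])
    scores -= scores.max()                    # subtract max for numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(MODES, probs))

def l1_mode_share_error(simulated_shares, benchmark_shares):
    """Calibration objective: L1 norm between simulated and observed mode shares."""
    return sum(abs(simulated_shares[m] - benchmark_shares[m]) for m in MODES)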

Usage

  1. Clone our repo and initialize the HpBandSter (commit: 841db4b) and BEAM (commit: cda7c18) submodules

    git clone https://github.com/kiranchhatre/BEAM-Bayes-Opt.git 
    cd BEAM-Bayes-Opt
    git submodule init
    git submodule update
  2. Install requirements

    conda env create --name BEAMBayesOpt --file environment.yml
    conda activate BEAMBayesOpt
  3. BEAM setup

    System requirements:

     1. Java Runtime Environment or Java Development Kit 1.8
     2. VIA visualization app: https://simunto.com/via/
     3. Git-LFS: https://git-lfs.github.com/
     4. Gradle: https://gradle.org/install/
    

    Once these are installed, set up the Git-LFS configuration, then install and test BEAM as follows:

    # Git-LFS configuration
    git lfs install
    git lfs env
    git lfs pull
    
    gradle classes # install BEAM
    
    ./gradlew :run -PappArgs="['--config', 'test/input/beamville/beam.conf']" # run BEAM on toy scenario
  4. HpBandSter setup

    cd BOHB/
    python setup.py develop --user
    
  5. Run BEAM calibration experiment

    • Change the relevant config paths and the BEAM scenario you'd like to optimize
    • Change the scenario config file parameters as needed in beam/test/input/sf-light/sf-light-0.5k.conf
    • Run python Bayesian-worker-optimizer/BeamOptimizer.py (a sketch of the search space it optimizes over is shown below)
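
A minimal sketch of how the alternative specific constants could be exposed to the optimizer as a ConfigSpace search space. The parameter names and bounds below are illustrative only; the exact names and ranges used in Bayesian-worker-optimizer/BeamOptimizer.py may differ.

import ConfigSpace as CS
import ConfigSpace.hyperparameters as CSH

def get_configspace():
    """Search space over the alternative specific constants (ASCs)."""
    cs = CS.ConfigurationSpace()
    # one ASC per non-reference mode; the bounds are illustrative only
    for mode in ["bike", "walk", "ride_hail", "drive_transit",
                 "walk_transit", "ride_hail_transit", "ride_hail_pooled"]:
        cs.add_hyperparameter(
            CSH.UniformFloatHyperparameter(f"{mode}_intercept", lower=-10.0, upper=10.0))
    return cs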

Parallel runs are coordinated through a Pyro4 nameserver from the HpBandSter implementation, as follows (a sketch of what goes in place of the # Code placeholder is given after the block):

import hpbandster.core.nameserver as hpns

# start a Pyro4 nameserver so the optimizer and its parallel workers can find each other
NS = hpns.NameServer(run_id='BEAM', host='127.0.0.1', port=None)
NS.start()

# Code: run the optimizer and workers here

NS.shutdown()
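
For context, the placeholder above is where workers and the BOHB optimizer attach to the running nameserver. A rough sketch of that part is shown below; BeamWorker, run_beam_and_score, and the budget values are hypothetical placeholders standing in for the logic in Bayesian-worker-optimizer/BeamOptimizer.py, and get_configspace() refers to the search-space sketch above.

from hpbandster.core.worker import Worker
from hpbandster.optimizers import BOHB

class BeamWorker(Worker):
    """Placeholder worker: runs one BEAM simulation for a candidate set of ASCs
    and reports the L1 mode-share error as the loss."""
    def compute(self, config, budget, **kwargs):
        loss = run_beam_and_score(config, n_iterations=int(budget))  # hypothetical helper
        return {"loss": loss, "info": {}}

# attach one or more workers to the nameserver; several workers enable parallel BEAM runs
worker = BeamWorker(nameserver='127.0.0.1', run_id='BEAM')
worker.run(background=True)

# BOHB proposes candidate ASCs from its KDE model and schedules them on Hyperband budgets
bohb = BOHB(configspace=get_configspace(), run_id='BEAM', nameserver='127.0.0.1',
            min_budget=1, max_budget=10)   # budgets in BEAM iterations (illustrative)
result = bohb.run(n_iterations=20)
bohb.shutdown(shutdown_workers=True)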

More information on BEAM can be found in the BEAM docs, and on HpBandSter in a blog post.

Citation

If you find our work useful for your research, please consider citing the paper:

@misc{https://doi.org/10.48550/arxiv.2207.05041,
  doi = {10.48550/ARXIV.2207.05041},
  url = {https://arxiv.org/abs/2207.05041},
  author = {Chhatre, Kiran and Feygin, Sidney and Sheppard, Colin and Waraich, Rashid},
  keywords = {Machine Learning (cs.LG), Multiagent Systems (cs.MA), FOS: Computer and information sciences},
  title = {Parallel Bayesian Optimization of Agent-based Transportation Simulation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
}

Acknowledgement

The authors would like to thank the BEAM team for technical support. This research was supported by a Berkeley Lab fellowship and the German National Scholarship provided by the Hans Hermann Voss Foundation.


* Work was done as a Berkeley Lab affiliate; the author is now at KTH, Sweden.
