Source code for the ECCV 2022 paper "LaMAR: Benchmarking Localization and Mapping for Augmented Reality".

Overview

The LaMAR Benchmark for Localization and Mapping in Augmented Reality

This repository hosts the source code for our ECCV 2022 paper:

  • LaMAR: Benchmarking Localization and Mapping for Augmented Reality
  • Authors: Paul-Edouard Sarlin*, Mihai Dusmanu*, Johannes L. Schönberger, Pablo Speciale, Lukas Gruber, Viktor Larsson, Ondrej Miksik, and Marc Pollefeys

This pre-release contains the code required to load the data and run the evaluation. More details on the ground-truthing tools, data, and leaderboard will follow later.

Usage

Requirements:

  • Python >= 3.8
  • pycolmap installed from source (recommended) or via pip install pycolmap
  • hloc and its dependencies
  • raybender
  • pyceres
  • everything listed in requirements.txt, via python -m pip install -r requirements.txt (a combined installation sketch is shown below)
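
The following is a minimal installation sketch rather than an official recipe: the git URLs for hloc, raybender, and pyceres are assumptions, and raybender and pyceres additionally depend on system libraries (Embree and Ceres, respectively), so follow each project's own README if the steps below differ.

python -m pip install pycolmap                                                # or build pycolmap from source (recommended)
python -m pip install git+https://github.com/cvg/Hierarchical-Localization   # hloc and its dependencies (assumed URL)
python -m pip install git+https://github.com/cvg/raybender                    # assumed URL; requires Embree
python -m pip install git+https://github.com/cvg/pyceres                      # assumed URL; requires Ceres
python -m pip install -r requirements.txt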

Running the single-frame evaluation:

python -m lamar_benchmark.run \
	--scene SCENE --ref_id map --query_id query_phone \
	--retrieval netvlad --feature sift --matcher mnn

By default, the script assumes that the data was placed in ./data/ and will write the intermediate dumps and final outputs to ./outputs/.
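
The same entry point can be reused to sweep several configurations, as in the hedged sketch below; only the flag names and the sift/netvlad/mnn values come from this README, so the superpoint feature name is an assumption and should be checked against the configurations actually supported by the code.

for FEATURE in sift superpoint; do   # superpoint is an assumed option name
	python -m lamar_benchmark.run \
		--scene SCENE --ref_id map --query_id query_phone \
		--retrieval netvlad --feature $FEATURE --matcher mnn
done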

BibTeX citation

Please consider citing our work if you use any code from this repo or ideas presented in the paper:

@inproceedings{sarlin2022lamar,
  author    = {Paul-Edouard Sarlin and
               Mihai Dusmanu and
               Johannes L. Schönberger and
               Pablo Speciale and
               Lukas Gruber and
               Viktor Larsson and
               Ondrej Miksik and
               Marc Pollefeys},
  title     = {{LaMAR: Benchmarking Localization and Mapping for Augmented Reality}},
  booktitle = {ECCV},
  year      = {2022},
}

Legal Notices

Microsoft and any contributors grant you a license to the Microsoft documentation and other content in this repository under the Creative Commons Attribution 4.0 International Public License, see the LICENSE file, and grant you a license to any code in the repository under the MIT License, see the LICENSE-CODE file.

Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks. Microsoft's general trademark guidelines can be found at http://go.microsoft.com/fwlink/?LinkID=254653.

Privacy information can be found at https://privacy.microsoft.com/en-us/

Microsoft and any contributors reserve all other rights, whether under their respective copyrights, patents, or trademarks, whether by implication, estoppel or otherwise.

Comments
  • Does the trajectories.txt file contain the GT poses?

    Hello, in the HGE data folder, the map folder contains the mapping data used to build the initial model. There is a trajectories.txt file that contains camera poses. Are those poses the ground-truth data for the map model? Are they also in metric units?

    opened by alexs7 4
  • 3D Laser Scans for Mapping Session

    Hello,

    I noticed that the 3D laser scans used to obtain the ground-truth mapping poses weren't included in the evaluation data.

    Is there a plan to release these database scans (similar to InLoc), or are we expected to reconstruct a 3D map by running a mapping algorithm using the provided mapping images and poses (e.g. COLMAP MVS)?

    Thanks for the clarification.

    opened by lahavlipson 2
  • HL2 IMU Data Release

    I didn't see any HoloLens 2 IMU data in the evaluation release. Did I miss it, or is the plan to include it with the full release?

    Thank you.

    opened by ArmandB 1
  • [Bug Fix] update the code for newest pyceres

    Hi, @Skydes,

    Thanks for your great work!

    I found a small bug, caused by the latest pyceres changes, and fixed it in this PR.

    Hope that works for you!

    Best, Yang

    opened by foreverYoungGitHub 1
  • gt pose error

    Hello, I would like to use your great benchmark to test my program, but I found some errors in the GT poses, like the following:
    I think that trajectories.txt in each folder is the GT pose, right? But the poses in 'query_val_phone' are not the same as the poses in 'map' even though they are the same images. So trajectories.txt in the 'query_val_phone' folder is not the GT pose?
    
    (two screenshots attached)
    opened by zhizunhu 1
  • Request for access to the LaMAR dataset

    Wonderful project! However, when I applied for the LaMAR dataset through this URL: https://forms.office.com/r/xxjpm10jvs, nothing happened after I filled in the information, and I have not received any feedback by email. How can I get access to the LaMAR dataset? Thanks for your kind help.

    opened by jackchinor 1
  • CVisG?

    Hey, amazing work here! Huge project!!

    I saw the slide on CVisG and the small call-out in the README. Has CVisG been released? Is it going to be its own repo, or will it be included here?

    opened by pwais 0
  • Images in the current evaluation

    At some timesteps in the HoloLens sequences there are only images for a subset of the cameras (hetlf, hetll, hetrf, hetrr). For these timesteps, are there additional images that will eventually be released?

    Also, the paper says that the query images are sampled every 1s/1m/20°, although I assume the sequences were recorded at a higher frame-rate. Will the full video sequences eventually be released? These intermediate frames could help localization during fast motion, even if they aren't used for evaluation.

    opened by lahavlipson 1