Quantifying Shape Complexity

SHREC 2021 Track

New: The non-watertight meshes in Collection #2 have been fixed. (Feb 28)

Motivation

The proposed track is designed to provide a benchmark for the quantification of 3D shape complexity. We investigate the correlates of complexity perception and the possibility of using complexity for content-based object retrieval. Shape complexity is an ill-defined concept; hence, we propose to explore it using multiple tasks and data collections. Due to the relative nature of complexity, a linear order may not make sense, and there may be multiple aspects of complexity that can be measured. For this reason, for the tasks concerning the second and third collections, we encourage participants to provide scores as a pair of scalars instead of a single one. Likewise, two alternative orders may also be submitted. Based on research on visual complexity perception [1], it seems reasonable to concentrate on two measures rather than more. If a participating method yields more than two measures, we suggest that participants apply dimensionality reduction.
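As one concrete route (an illustration on our part, not a requirement of the track), such a reduction could be done with PCA; the array sizes and the use of scikit-learn below are assumptions:

    # A minimal sketch of reducing k > 2 per-shape complexity measures
    # to the suggested two. The data here are random placeholders.
    import numpy as np
    from sklearn.decomposition import PCA

    raw = np.random.rand(25, 5)                   # placeholder: 25 shapes, 5 raw measures
    two = PCA(n_components=2).fit_transform(raw)  # (25, 2): two scores per shape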

Dataset

  • Collection 1 is available in MATLAB's .MAT format here, and in .OBJ format here (a minimal loading sketch follows this list).
  • Collection 2 can be downloaded from here.
  • Collection 3 can be downloaded from here.
If your method works essentially with 2D data, please contact us.
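For a quick start, here is a minimal loading sketch. The file paths, the .MAT variable layout, and the choice of libraries (scipy, trimesh) are our assumptions, not part of the official release; inspect the downloaded files for the actual structure.

    # A minimal loading sketch; paths and variable names are hypothetical.
    import scipy.io
    import trimesh

    mat = scipy.io.loadmat("collection1/c+/1.mat")  # hypothetical path
    print(mat.keys())                               # inspect the stored variables

    mesh = trimesh.load("collection1/c+/1.obj")     # hypothetical path
    print(mesh.vertices.shape, mesh.faces.shape)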

Collections

  • In the first collection, we consider synthetically generated noisy cubes and spheres.
  • The second collection pertains to abstract shapes and consists of two subcollections, each composed of 25 shapes.
  • The third part consists of the categorized shapes of the Princeton Segmentation Benchmark [2].
Each of these collections probes a different aspect of shape complexity.

Submission

Participants are expected to submit, via email, a short description of their method (to be included in the final paper) together with the scores they acquire.

The scores will be submitted as a single text file following the format specified below.

  1. Each odd-numbered line gives the unique folder name of a subcollection
    (such as \( \verb|c+|, \verb|c-|, \verb|s+|, \verb|s-|, \verb|collection2_1|, \verb|collection2_2|, \verb|off|\)).
  2. Each even-numbered line gives the scores of the shapes in the folder named on the previous line.
  3. For each shape, the acquired score(s) are given inside parentheses, separated by spaces; e.g., if the scores are \( m_1 \) and \( m_2 \), then \( (m_1\,\,m_2) \) denotes the measurements that belong to the shape.
  4. If the method cannot yield any score for some shape, this is indicated by empty parentheses in place of that shape's score: \( () \)
  5. Scores are listed in increasing order of the shapes' file numbers.
Example
If the subcollection \(A\) consists of \( \verb|1.obj| \), \(\verb|2.obj|\), and \(\verb|10.obj|\), the submitted file reads

\( \verb|line 1>|\quad A \)
\( \verb|line 2>|\quad (m_{1,1}\,\,m_{1,2})\,\,(m_{2,1}\,\,m_{2,2})\,\,(m_{10,1}\,\,m_{10,2}) \)

where \(m_{1,1}\) and \(m_{1,2}\) are the scores for \(\verb|1.obj|\) and so on.
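As a convenience, here is a minimal sketch of a writer for this format. The \( \verb|write_submission| \) helper and its input layout are hypothetical: a dict mapping each folder name to a list of per-shape score tuples, already sorted by increasing shape number, with None marking a shape the method could not score.

    # A minimal sketch of writing the submission file described above.
    def write_submission(scores, path="submission.txt"):
        with open(path, "w") as f:
            for folder, shape_scores in scores.items():
                f.write(folder + "\n")          # odd line: folder name
                cells = []
                for s in shape_scores:
                    if s is None:
                        cells.append("()")      # rule 4: no score for this shape
                    else:
                        cells.append("(" + " ".join(str(m) for m in s) + ")")
                f.write(" ".join(cells) + "\n")  # even line: scores

    write_submission({"c+": [(0.41, 1.2), None, (0.77, 2.5)]})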

Queries

The queries on the first collection are controlled experiments: each ordering task is performed on a few shapes and admits a total order determined by the one-parameter noise added to the base shape. The purpose here is to explore the correlation of complexity with noise. Hence, we expect participants to submit a single complexity score or a total order.
In the second and third collections, the submitted scores will be compared with the ground truth on the basis of the complexity ranks assigned to each shape.
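While the official evaluation metric is left to the organizers, participants can preview a rank-based comparison with Spearman's rank correlation; the scores and reference ranks below are made-up placeholders:

    # A minimal sketch (an assumed metric, not the track's official one) of
    # comparing the ordering induced by submitted scores with a reference
    # ranking via Spearman's rank correlation.
    from scipy.stats import spearmanr

    my_scores = [0.3, 1.2, 0.9, 2.1]  # hypothetical per-shape scores
    ref_ranks = [1, 3, 2, 4]          # hypothetical ground-truth ranks

    rho, p = spearmanr(my_scores, ref_ranks)
    print(f"Spearman rho = {rho:.3f} (p = {p:.3f})")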

Registration

If you intend to participate in the track, please send us an email and mention your affiliation and co-authors. This helps us keep track of the participants and plan accordingly. It also allows us to send you updates about the track.


Organizers

  • M. Ferhat Arslan 1
  • Alexandros Haridis 2
  • Paul L. Rosin 3
  • Sibel Tari 1

1: Middle East Technical University, Department of Computer Engineering
2: Massachusetts Institute of Technology, Department of Architecture
3: Cardiff University, School of Computer Science & Informatics


Schedule

The registration and submission deadlines are in AoE (Anywhere on Earth) timezone.

January 11 Track announcement & registration is open
January 31 (originally January 25) Dataset release
February 8 (originally February 1) Registration deadline
March 8 Participants submission deadline
March 15 Track paper submission to SHREC
April 15 SHREC: First reviews done, first stage decision on acceptance or rejection
May 15 SHREC: First revision due
June 15 SHREC: Second stage of reviews complete, decision on acceptance or rejection
June 30 SHREC: Final version submission
July 5 SHREC: Final decision on acceptance or rejection

References

[1] D. E. Berlyne, J. C. Ogilvie, and L. C. Parham. “The Dimensionality of Visual Complexity, Interestingness, and Pleasingness”. In: Canadian Journal of Psychology 22.5 (1968), pp. 376–387.
[2] Xiaobai Chen, Aleksey Golovinskiy, and Thomas Funkhouser. “A Benchmark for 3D Mesh Segmentation”. In: ACM Transactions on Graphics 28.3 (2009).



Website last updated: Feb 28 2021, 16:45 (GMT+3)