Piero V.

Heterogeneous comparisons in C++ containers

Occasionally, I still work with my Intel RealSense, on my RGBD toolbox, and on related topics.

Recently, I decided to allow multiple formats for the color images (initially, I had hardcoded them to JPEG only).

Therefore, I had to modify my data structures to work with pairs of paths instead of their common stems.

The UI to add new frames to the scene lists all the valid frames once and puts them into an ordered std::set, now keyed on the path pair.

With my previous assumption of a fixed format, I could do lookups on the set to quickly check whether a provided stem was valid.

After the changes, this involved a heterogeneous comparison, i.e., a comparison of different types.

The trivial way to do this is a linear search, e.g., with std::find and a lambda or a range-based for.

However, this seemed like a common need to me, and I was curious to see whether there was a way to still take advantage of the optimized lookups provided by the containers.

Indeed, there is! But it has only been available since C++14.

After implementing bool operator<(const Other &, const Key &) (and its symmetric counterpart, bool operator<(const Key &, const Other &)), you can pass std::less<> as a comparator to your container instead of the default std::less<Key>.

That is a particular template specialization (std::less<void>) designed for this purpose. It defines an is_transparent member type that enables the templated versions of some methods of the STL containers, such as find, count, and lower_bound.

This Stack Overflow answer contains many details. The TL;DR is that this implementation avoids unwanted conversions that could have undesired effects (e.g., the continuous creation of temporary objects from literals).
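As an illustration of the mechanism (this is not my toolbox's actual code: all the names are made up, and it uses std::filesystem, which is C++17, just for brevity, while the transparent comparator itself only needs C++14), a sketch could look like this:

    #include <filesystem>
    #include <iostream>
    #include <set>
    #include <string>

    namespace fs = std::filesystem;

    // Made-up example: a frame is keyed on a (depth, color) path pair, while
    // lookups use only the common stem.
    struct FramePaths {
        fs::path depth;
        fs::path color;
    };

    // Ordering used by the set. Defining the operators in the same namespace
    // as FramePaths lets ADL find them from inside std::less<>.
    bool operator<(const FramePaths &a, const FramePaths &b) {
        return a.depth.stem() < b.depth.stem();
    }

    // Heterogeneous comparisons between the key type and the lookup type.
    // They must be consistent with the ordering above.
    bool operator<(const FramePaths &frame, const std::string &stem) {
        return frame.depth.stem() < fs::path(stem);
    }
    bool operator<(const std::string &stem, const FramePaths &frame) {
        return fs::path(stem) < frame.depth.stem();
    }

    int main() {
        // std::less<> (i.e., std::less<void>) defines is_transparent, which
        // enables the templated overloads of find, count, lower_bound, etc.
        std::set<FramePaths, std::less<>> frames{
                {"scan/000.png", "scan/000.jpg"},
                {"scan/001.png", "scan/001.exr"},
        };

        // Logarithmic lookup by stem, without building a temporary FramePaths.
        if (frames.find(std::string("001")) != frames.end()) {
            std::cout << "001 is a valid frame" << std::endl;
        }
        return 0;
    }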

My RGBD toy box

In the last few months, in my free time, I have been developing a small application to process RGBD datasets that capture people. In particular, my goal is to create 3D scans of heads. I do not expect these models to be well-made or even usable without some processing, but I wonder if I can at least turn this data into starting models.

I first started to work with RGBD cameras during my internship at Altair. At the time, I built a small pipeline, based on Kinect Fusion, to reconstruct models.

Kinect Fusion, however, is an online method. The advantage is that it gives immediate feedback if it loses the camera tracking. The disadvantage is that it needs a pretty powerful GPU. I have one on my desktop, but I took all my datasets using laptops.

Also, in my experience, getting a usable dataset with Kinect Fusion takes time and several attempts (the acquisition must proceed very slowly), which, generally speaking, is not always compatible with… people 😄️. They might move, lose patience, etc… … [Read the rest]

NetStylus

Preamble

Recently I started digital sculpting, and I immediately realized that the mouse is not the best tool for this purpose. As any tutorial will tell you, a drawing tablet will make you much faster and much more precise.

I do not have one, but I have a Microsoft Surface Pro and a Surface Pen. However, it is the base, not-so-powerful model: it has just a Core m3 and 4 GB of RAM. It was enough for studying at university, but, sadly, I cannot even think of running a 3D editor on it.

Initially, I tried Weylus, a program that allows you to control a machine (my Linux desktop, in my case) through a web browser from any device with a stylus. Being web-based, it works on any device, including iPads, and it even mirrors the screen.

However, it did not play well with the barrel button of my pen. And that button is critical for a lot of workflows.

Therefore, I decided to write my own software to do so: NetStylus.

The Win32 API for tablets

I discovered that Microsoft has cared about pen input for years: they started the Tablet PC effort with Windows XP, before 2005!

They offer several APIs and pieces of functionality, but we are interested in the one that sits at the beginning of the chain: the Real-Time Stylus interface. … [Read the rest]

Picking voxels on the Open3D visualizer

While working on my M.S. thesis, I got to know Open3D. To me, it is basically the Swiss army knife for 3D data acquired from reality.

It offers implementations of performant algorithms, Python support to quickly change your scripts or adjust parameters to improve your results, and a visualizer to see them.

Unfortunately, the visualizer itself is not very interactive. Often, I would like to pick objects in the scene and drag or modify them. Sadly, that is not possible in Python. But it is in C++, and I will explain how you can implement it.

But first, I suggest you download my code, as I will refer to it. It combines the routines of the following sections to allow picking and deleting voxels.

Interact with the Mouse

Usually, when I want to do something with Open3D, I look at the examples on their Read the Docs documentation. However, they also offer a Doxygen-based one for the C++ part of the library. … [Read the rest]
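To give an idea of the general approach (this is only a rough sketch, not the code I linked above: the callback names are the ones I remember from the Visualizer headers, so double-check them against your Open3D version, and it assumes the GLFW headers that Open3D uses under the hood are available), a subclass of the visualizer can override the protected mouse callbacks:

    // Sketch: override the protected GLFW mouse callbacks of the C++
    // Visualizer to learn where the user clicked. Verify the include path and
    // the method signatures against your Open3D version.
    #include <open3d/Open3D.h>

    #include <GLFW/glfw3.h>

    #include <iostream>

    class PickingVisualizer : public open3d::visualization::Visualizer {
    protected:
        void MouseButtonCallback(GLFWwindow *window, int button, int action,
                                 int mods) override {
            if (button == GLFW_MOUSE_BUTTON_LEFT && action == GLFW_PRESS) {
                double x, y;
                glfwGetCursorPos(window, &x, &y);
                std::cout << "Clicked at (" << x << ", " << y << ")" << std::endl;
                // Here you would unproject (x, y) with the current view and
                // projection matrices and intersect the ray with the voxels.
            }
            // Keep the default behavior (camera rotation, zoom, etc.).
            Visualizer::MouseButtonCallback(window, button, action, mods);
        }
    };

    int main() {
        auto mesh = open3d::geometry::TriangleMesh::CreateSphere(1.0);
        mesh->ComputeVertexNormals();

        PickingVisualizer vis;
        vis.CreateVisualizerWindow("Picking sketch", 800, 600);
        vis.AddGeometry(mesh);
        vis.Run();
        vis.DestroyVisualizerWindow();
        return 0;
    }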

Multiple VBOs with OpenGL ES 2.0

Emscripten and WebAssembly

I recently started a new little project, for which I am using SDL and OpenGL ES 2.0.

During the lockdown, I had learned some OpenGL 3.3 basics, always with SDL and a bit of Bullet, but then I set it all aside. Now I have started again, but with OpenGL ES 2.0, because it offers a very interesting possibility: with Emscripten, you can compile the project to WebAssembly, and GLES 2.0 is translated directly into WebGL.

Even though I am no longer very passionate about web development, this possibility really attracts me. First of all, by publishing on the web you can reach more people, if only because waiting a few seconds for a page to load is much less demanding than having someone download a zip, unpack it, and launch the program. And let's not even talk about installers…

The code can be written in practically any language (C, C++, or anything with an interpreter/VM written in those languages), and it can then interface with the JavaScript running in the browser. This means that I could write the 3D rendering parts in C++ and the UI parts in HTML, CSS, and JavaScript. The last few times I tried to do something in JS I got fed up quickly; however, for building UIs, the web stack has few equals. … [Read the rest]
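Just to give an idea of what I mean by interfacing with JavaScript, here is a minimal, hypothetical sketch (the function names and the HTML element are made up): EM_JS defines a JavaScript function callable from C++, while EMSCRIPTEN_KEEPALIVE exports a C++ function that the page's JavaScript can call, for instance from a UI widget.

    #include <emscripten/emscripten.h>

    #include <cstdio>

    // JavaScript compiled into the page: update a hypothetical HTML label.
    EM_JS(void, show_fps, (int fps), {
        document.getElementById("fps-label").textContent = "FPS: " + fps;
    });

    // Callable from the page's JavaScript, e.g., as Module._set_quality(2)
    // or through ccall/cwrap.
    extern "C" EMSCRIPTEN_KEEPALIVE void set_quality(int level) {
        std::printf("Render quality set to %d\n", level);
    }

    int main() {
        show_fps(60);
        return 0;
    }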