I fully agree that marker-based AR is not the proper way to do tracking. On a mobile phone, the best approach is to leverage the camera for "natural feature tracking" and improve accuracy via sensor fusion. Depending on the application / use case, you may be able to use known objects as markers (think amiibo, but with visual recognition). There are different algorithms for camera-based natural feature tracking, some of them freely available. Qt offers a good baseline for creating this type of application, but currently you need to do the tracking in your application.
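To make "natural feature tracking" concrete: the core loop detects keypoints in the camera frame, describes them as binary descriptors, and matches them against a known reference. A minimal, hypothetical sketch of the matching step (Lowe's ratio test over Hamming distances, with toy descriptors packed as ints; not any Qt or OpenCV API) might look like this:

```python
# Hypothetical sketch of the descriptor-matching step in natural
# feature tracking. Descriptors are toy binary strings packed as ints;
# real systems use ORB/BRISK-style descriptors from a vision library.

def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def ratio_test_matches(query, train, ratio=0.75):
    """Lowe's ratio test: keep a match only when the best candidate is
    clearly better than the second best, which rejects the ambiguous
    matches that repetitive texture produces."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

# Descriptor 0b1111 is close to 0b1110 (distance 1) and far from
# 0b0000 (distance 4), so the ratio test keeps that match.
print(ratio_test_matches([0b1111], [0b1110, 0b0000]))  # → [(0, 0)]
```

The surviving matches then feed a pose solver (e.g. PnP with RANSAC) to recover the camera pose relative to the tracked object.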
On 05/03/2018, 23.31, "Interest on behalf of Jason H" <email@example.com on behalf of ***@gmx.com> wrote:
So QR-code-based AR is kind of a joke to me (see note below), but real AR (a visualization overlay) seems quite useful. Unfortunately, no phone really supports this correctly yet in a 3D VR way, since that requires a dual-camera configuration with camera separation equal to eye separation. However, the middle ground of AR (sticker-free scene augmentation) shows some promise at this time.
I'm working on an app that isn't AR (maybe when Qt3D is ready for phones!) but needs the same information an AR app does. You may remember some of my posts about the tilt range problem: -45 to 45 degrees of pitch is not enough, and the readings are jittery. The compass is *extremely* jittery as well. I can run these through an EMA (exponential moving average), but that creates motion-sickening lag.
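The jitter-vs-lag tradeoff of a plain EMA is easy to demonstrate with synthetic data: on a step change, the smoothed value only approaches the new reading asymptotically. The alpha value below is illustrative:

```python
# Why a plain EMA trades jitter for lag: after a step input the
# filtered value converges to the new reading only gradually.

def ema(samples, alpha=0.1):
    out, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

# Compass jumps from 0 to 90 degrees. Twenty samples after the step,
# the EMA is still roughly 11 degrees behind -- lag the eye perceives
# as swim in an overlay.
readings = [0.0] * 5 + [90.0] * 20
smoothed = ema(readings)
print(round(smoothed[-1], 1))  # → 79.1
```

Raising alpha reduces the lag but lets the sensor jitter back through, which is exactly the dilemma described above.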
What is the "proper" way to get accurate phone orientation information? Android VR certification requires 90 fps, which is an ~11 ms update interval.
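One common answer to "smooth but low-lag orientation" is a complementary filter: integrate the gyro for fast, low-jitter response and slowly correct its drift with the absolute (but noisy) accelerometer/magnetometer angle. Below is a simplified, single-axis sketch with synthetic data; the gains and noise levels are assumptions, and this is not the Android or Qt sensor API (on Android, the fused TYPE_ROTATION_VECTOR sensor does something similar in the HAL):

```python
# Simplified 1-axis complementary filter: gyro rate is trusted
# short-term (k close to 1), while (1 - k) slowly pulls the estimate
# toward the absolute but jittery accelerometer angle.
import random

def complementary(gyro_rates, accel_angles, dt=0.011, k=0.98):
    angle = accel_angles[0]
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = k * (angle + rate * dt) + (1 - k) * acc
    return angle

# Device held still at 30 degrees: gyro reads ~0 deg/s plus noise,
# accelerometer reads a very jittery 30 degrees.
random.seed(1)
gyro = [random.gauss(0.0, 0.5) for _ in range(500)]
accel = [30.0 + random.gauss(0.0, 3.0) for _ in range(500)]
print(round(complementary(gyro, accel), 1))
```

The output sits close to 30 with far less variance than the raw accelerometer stream, yet a real rotation shows up within one gyro sample, so there is no EMA-style lag. dt = 0.011 matches the ~11 ms frame interval mentioned above.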
Note: QR-code-based AR is disqualified because the orientation is determined relative to the QR code, by analyzing the code's orientation in the image.
Interest mailing list