
Neural network model improves machine vision and object detection under low-light conditions

Graphical abstract. Credit: Image and Vision Computing (2024). DOI: 10.1016/j.imavis.2024.105313

When designing a robot such as Boston Dynamics' anthropomorphic robot Atlas, which has been shown exercising and sorting boxes, fiducial markers are the guides that help it move, detect objects and determine its exact location. They are a machine vision tool used to estimate objects' positions. At first glance they are flat, high-contrast black-and-white square codes, roughly resembling the QR marking system, but with an advantage: they can be detected at much greater distances.
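
To make the idea concrete, here is a minimal sketch of detecting classical square fiducial markers (ArUco markers) with OpenCV's aruco module, the kind of conventional detector the article contrasts with the new system. This is not the authors' released code; the image filename and the dictionary choice are assumptions, and the API shown is that of OpenCV 4.7 or later.

```python
# Minimal sketch: detecting ArUco-style square fiducial markers with
# OpenCV's aruco module (a classical detector, not the paper's system).
import cv2

# Load a grayscale image assumed to contain one or more markers.
img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)

# Choose a predefined marker dictionary; DICT_6X6_250 is a common choice.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Each detection yields a marker ID plus its four corner coordinates,
# which is what downstream pose estimation consumes.
corners, ids, _rejected = detector.detectMarkers(img)
print(f"Found {0 if ids is None else len(ids)} markers: {ids}")
```

Detectors of this kind assume the marker's black-and-white grid stays high-contrast, which is precisely what breaks down in dim or unevenly lit scenes.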

In logistics, for example, a ceiling-mounted camera can automatically identify a package's location using these markers, saving time and money. Until now, the system's weakness was lighting conditions, as classic machine vision techniques that accurately locate and decode markers fail in low-light situations.
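
For a sense of how a detected marker becomes a location, the sketch below recovers a marker's 3D position from its four image corners using OpenCV's solvePnP, roughly as a ceiling camera might localize a tagged package. It assumes a calibrated camera and a known marker size; the marker dimensions and function name are illustrative, not taken from the paper.

```python
# Sketch: marker pose from its detected corners, given camera calibration.
import numpy as np
import cv2

MARKER_SIDE = 0.10  # marker side length in meters (assumed)

# 3D corner coordinates in the marker's own frame (the z = 0 plane),
# ordered to match the detector's corner ordering.
OBJECT_POINTS = np.array([
    [-MARKER_SIDE / 2,  MARKER_SIDE / 2, 0],
    [ MARKER_SIDE / 2,  MARKER_SIDE / 2, 0],
    [ MARKER_SIDE / 2, -MARKER_SIDE / 2, 0],
    [-MARKER_SIDE / 2, -MARKER_SIDE / 2, 0],
], dtype=np.float32)

def marker_pose(image_corners, camera_matrix, dist_coeffs):
    """image_corners: (4, 2) pixel coordinates of one detected marker."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                  image_corners.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    # tvec is the marker's position in camera coordinates (meters),
    # rvec its orientation as a rotation vector.
    return ok, rvec, tvec
```

The geometry is the same however the corners were found, which is why more robust corner detection in the dark translates directly into more reliable localization.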

To address this problem, researchers Rafael Berral, Rafael Muñoz, Rafael Medina and Manuel J. Marín, of the Machine Vision Applications research group at the University of Cordoba, have developed a neural network-based system that, for the first time, can detect and decode fiducial markers under difficult lighting conditions. The paper is published in the journal Image and Vision Computing.

"The use of in the allows us to detect this type of marker in a more flexible way, solving the problem of lighting for all phases of the detection and decoding process," explained researcher Berral. The entire process is comprised of three steps: marker detection, corner refinement, and marker decoding, each based on a different neural network.

This is the first time a complete solution has been given to this problem since, as Manuel J. Marín points out, "there have been many attempts to increase speeds under optimal lighting, for example, but the problem of low lighting, or many shadows, had not been completely addressed throughout the process."

Boston Dynamics' anthropomorphic robot Atlas.

How to train your machine vision model

To train this model, which offers an end-to-end solution, the team created a synthetic dataset that reliably reflects the kinds of lighting conditions encountered when working with a marker system outside ideal conditions. Once trained, "the model was tested with real-world data, some produced in-house and others taken from previous works as references," the researchers indicate.
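
The paper's rendering procedure isn't detailed in the article, but a synthetic low-light dataset of this kind is typically built by darkening well-lit marker images; the sketch below applies one such augmentation, with every parameter range chosen purely for illustration.

```python
# Illustrative low-light augmentation for synthetic training images.
import numpy as np

rng = np.random.default_rng(0)

def darken(img: np.ndarray) -> np.ndarray:
    """img: uint8 HxWx3 image. Returns a randomly darkened copy."""
    x = img.astype(np.float32) / 255.0
    x = x ** rng.uniform(1.5, 3.0)   # gamma > 1 darkens midtones
    x *= rng.uniform(0.2, 0.7)       # global brightness drop
    h, w = x.shape[:2]
    # Soft horizontal shadow gradient across the frame.
    ramp = np.linspace(rng.uniform(0.3, 1.0), 1.0, w, dtype=np.float32)
    x *= ramp[None, :, None]
    return (np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)
```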

Both the artificially generated data used to train the model and the real-world images captured under unfavorable lighting are openly available. The system could thus be applied today, "since the code has been released and it can be tested with any image in which fiducial markers appear," notes Rafael Muñoz.

Thanks to this work, machine vision applications have overcome a new obstacle: moving in the dark.

More information: Rafael Berral-Soler et al, DeepArUco++: Improved detection of square fiducial markers in challenging lighting conditions, Image and Vision Computing (2024). DOI: 10.1016/j.imavis.2024.105313

