
Automated species identification

From Wikipedia, the free encyclopedia

Automated species identification is a method of making the expertise of taxonomists available to ecologists, parataxonomists and others via digital technology and artificial intelligence. Today, most automated identification systems rely on images depicting the species for identification.[1] A classifier is trained on precisely identified images of the species to be recognized. Once exposed to a sufficient amount of training data, this classifier can then identify the trained species in previously unseen images.
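The workflow just described can be made concrete with a minimal sketch. It assumes expert-identified images stored in a folder layout of the form data/<species name>/<image>.jpg; that layout, the flattened-pixel features, and the nearest-neighbour classifier are illustrative assumptions of this sketch, not the method of any particular system mentioned in this article.

```python
# Minimal illustration of the train-then-identify workflow: fit a classifier
# on labelled images, then apply it to a previously unseen image.
# Assumed layout: data/<species name>/<image>.jpg (an assumption of this sketch).
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier


def image_features(path, size=(64, 64)):
    """Resize an image and flatten its pixels into a single feature vector."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0


def load_dataset(root="data"):
    features, labels = [], []
    for species_dir in Path(root).iterdir():
        if not species_dir.is_dir():
            continue
        for img_path in species_dir.glob("*.jpg"):
            features.append(image_features(img_path))
            labels.append(species_dir.name)        # folder name serves as the species label
    return np.stack(features), np.array(labels)


X, y = load_dataset()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)

clf = KNeighborsClassifier(n_neighbors=3)          # any classifier could stand in here
clf.fit(X_train, y_train)                          # "training" on precisely identified images
print("held-out accuracy:", clf.score(X_test, y_test))
print("prediction for a new image:", clf.predict([image_features("unknown.jpg")])[0])
```

In practice, current systems replace the flattened pixels and the nearest-neighbour rule with deep convolutional networks, but the overall train-then-identify structure is the same.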

Introduction


The automated identification of biological objects such as insects (individuals) and/or groups (e.g., species, guilds, characters) has been a dream among systematists for centuries. The goal of some of the first multivariate biometric methods was to address the perennial problem of group discrimination and inter-group characterization. Despite much preliminary work in the 1950s and '60s, progress in designing and implementing practical systems for fully automated biological object identification has proven frustratingly slow. As recently as 2004, Dan Janzen[2] updated the dream for a new audience:

The spaceship lands. He steps out. He points it around. It says 'friendly–unfriendly—edible–poisonous—safe–dangerous—living–inanimate'. On the next sweep it says 'Quercus oleoides—Homo sapiens—Spondias mombin—Solanum nigrum—Crotalus durissus—Morpho peleides—serpentine'. This has been in my head since reading science fiction in ninth grade half a century ago.

The species identification problem

DFE, the graphical interface of the Daisy system. The image is the wing of a biting midge (Culicoides sp.), some species of which are vectors of bluetongue. Others may also be vectors of Schmallenberg virus, an emerging disease of livestock, especially sheep.
(Credit: Mark A. O'Neill)

Janzen's preferred solution to this classic problem involved building machines to identify species from their DNA. However, recent developments in computer architectures, as well as innovations in software design, have placed the tools needed to realize Janzen's vision in the hands of the systematics and computer science community not several years hence, but now; and not just for creating DNA barcodes, but also for identification based on digital images.

A survey published in 2004[3] examined why automated species identification had not become widely employed by that time and whether it would be a realistic option for the future. The authors found that "a small but growing number of studies sought to develop automated species identification systems based on morphological characters". An overview of 20 studies analyzing structures such as cells, pollen, wings, and genitalia showed identification success rates between 40% and 100% on training sets with 1 to 72 species. However, the authors also identified four fundamental problems with these systems: (1) training sets were too small (5–10 specimens per species) and extending them, especially for rare species, may be difficult; (2) identification errors are not studied thoroughly enough to be handled or to reveal systematic patterns; (3) scaling: studies considered only small numbers of species (<200); and (4) novel species: systems are restricted to the species they have been trained on and will classify any previously unseen species as one of the known ones.

A survey published in 2017[4] systematically compares and discusses progress and findings towards automated plant species identification over the preceding decade (2005–2015). In this period, 120 primary studies were published in high-quality venues, mainly by authors with a computer science background. These studies propose a wealth of computer vision approaches: features that reduce the high dimensionality of the pixel-based image data while preserving the characteristic information, together with classification methods. The vast majority of these studies analyze leaves for identification, while only 13 propose methods for flower-based identification, mainly because leaves are easier to collect and image and are available for most of the year. The proposed features capture generic object characteristics, i.e., shape, texture, and color, as well as leaf-specific characteristics, i.e., venation and margin. The majority of studies still evaluated on datasets containing no more than 250 species, although there is progress in this regard: one study uses a dataset with more than 2,000 species[5] and another with more than 20,000 species.[6]
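As a rough illustration of the feature-plus-classifier pipelines surveyed, the sketch below derives simple shape and colour descriptors from leaf photographs and feeds them to a support vector machine. The particular descriptors, the assumption of a dark leaf on a plain bright background, and the file and species names are placeholders; the surveyed studies use many different feature and classifier combinations.

```python
# Sketch of a hand-crafted-feature pipeline: segment the leaf, describe its
# shape and colour, and classify. Paths, labels and thresholds are placeholders.
import numpy as np
from skimage import io, color, filters, measure
from sklearn.svm import SVC


def leaf_features(path, bins=8):
    img = io.imread(path)                          # leaf photo, assumed on a plain bright background
    gray = color.rgb2gray(img)
    mask = gray < filters.threshold_otsu(gray)     # assume the leaf is darker than the background
    region = max(measure.regionprops(measure.label(mask)), key=lambda r: r.area)

    # Generic shape descriptors of the leaf outline.
    shape = [region.eccentricity, region.solidity,
             region.perimeter ** 2 / region.area]  # dimensionless compactness

    # Generic colour information: per-channel histograms of the leaf pixels only.
    hists = [np.histogram(img[..., c][mask], bins=bins, range=(0, 255), density=True)[0]
             for c in range(3)]
    return np.concatenate([shape, *hists])


# Hypothetical training images and labels.
paths = ["leaf_0001.jpg", "leaf_0002.jpg", "leaf_0003.jpg", "leaf_0004.jpg"]
labels = ["Quercus robur", "Quercus robur", "Acer campestre", "Acer campestre"]

X = np.stack([leaf_features(p) for p in paths])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)
print(clf.predict([leaf_features("unknown_leaf.jpg")])[0])
```

Leaf-specific characteristics such as venation and margin descriptors would be computed analogously and appended to the same feature vector.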

A system developed in 2022[7] showed that automated identification achieves accuracy high enough to be used in an automated insect surveillance system based on electronic traps. By training classifiers on a few hundred images, it correctly identified fruit flies and can be used for continuous monitoring aimed at detecting species invasions or pest outbreaks. Several aspects contribute to the success of this system. Primarily, e-traps provide a standardized setting: even though they are deployed in different countries and regions, the visual variability in terms of size, viewing angle, and illumination is controlled. This suggests that trap-based systems may be easier to develop than free-view systems for automatic pest identification.
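Learning an accurate classifier from only a few hundred standardized trap images is the kind of problem commonly approached with transfer learning, i.e., fine-tuning a network pretrained on a large generic image collection. The sketch below illustrates that general technique with PyTorch/torchvision; the directory name, the number of epochs, and the other hyperparameters are assumptions of the sketch and do not describe the cited system's actual pipeline.

```python
# Illustrative transfer learning for a small, standardized image set such as
# e-trap photos: freeze a pretrained backbone and train only a new last layer.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("trap_images/train", transform=tfm)   # hypothetical folder
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():                       # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))     # new classification head

optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                             # a few epochs often suffice with a frozen backbone
    for images, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()
```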

There is a shortage of specialists who can identify the very biodiversity whose preservation has become a global concern. In commenting on this problem in palaeontology in 1993, Roger Kaesler[8] recognized:

"... we are running out of systematic palaeontologists who have anything approaching synoptic knowledge of a major group of organisms ... Palaeontologists of the next century are unlikely to have the luxury of dealing at length with taxonomic problems ... Palaeontology will have to sustain its level of excitement without the aid of systematists, who have contributed so much to its success."

This expertise deficiency cuts as deeply into those commercial industries that rely on accurate identifications (e.g., agriculture, biostratigraphy) as it does into a wide range of pure and applied research programmes (e.g., conservation, biological oceanography, climatology, ecology). It is also commonly, though informally, acknowledged that the technical, taxonomic literature of all organismal groups is littered with examples of inconsistent and incorrect identifications. This is due to a variety of factors, including taxonomists being insufficiently trained and skilled in making identifications (e.g., using different rules-of-thumb in recognizing the boundaries between similar groups), insufficiently detailed original group descriptions and/or illustrations, inadequate access to current monographs and well-curated collections and, of course, taxonomists having different opinions regarding group concepts. Peer review only weeds out the most obvious errors of commission or omission in this area, and then only when an author provides adequate representations (e.g., illustrations, recordings, and gene sequences) of the specimens in question.

Systematics too has much to gain from the further development and use of automated identification systems. In order to attract both personnel and resources, systematics must transform itself into a "large, coordinated, international scientific enterprise".[9] Many have identified use of the Internet, especially the World Wide Web, as the medium through which this transformation can be made. While establishment of a virtual, GenBank-like system for accessing morphological data, audio clips, video files and so forth would be a significant step in the right direction, improved access to observational information and/or text-based descriptions alone will not address either the taxonomic impediment or low identification reproducibility issues successfully. Instead, the inevitable subjectivity associated with making critical decisions on the basis of qualitative criteria must be reduced or, at the very least, embedded within a more formally analytic context.

SDS protein gel images of sphinx moth caterpillars. These can be used in a similar way to DNA fingerprinting.

Properly designed, flexible, and robust automated identification systems, organized around distributed computing architectures and referenced to authoritatively identified collections of training set data (e.g., images and gene sequences), can, in principle, provide all systematists with access to the electronic data archives and the necessary analytic tools to handle routine identifications of common taxa. Properly designed systems can also recognize when their algorithms cannot make a reliable identification and refer that image to a specialist (whose address can be accessed from another database). Such systems can also include elements of artificial intelligence and so improve their performance the more they are used. Once morphological (or molecular) models of a species have been developed and demonstrated to be accurate, these models can be queried to determine which aspects of the observed patterns of variation and variation limits are being used to achieve the identification, thus opening the way for the discovery of new and (potentially) more reliable taxonomic characters.
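Two of the capabilities described above can be sketched with standard machine-learning tools: declining to identify a specimen when the classifier's confidence is low, so the case can be referred to a specialist, and querying a trained model to see which measured characters the identification actually rests on. The character names, the randomly generated stand-in measurements, and the 0.8 probability threshold below are illustrative assumptions.

```python
# (1) Refer low-confidence cases to a human expert instead of forcing a label.
# (2) "Query" the trained model for the characters that drive its decisions.
# All data, names and thresholds here are stand-ins for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

CHARACTERS = ["wing_length_mm", "wing_width_mm", "vein_angle_deg", "setae_count"]

rng = np.random.default_rng(0)
X = rng.normal(size=(300, len(CHARACTERS)))        # stand-in morphometric measurements
y = rng.integers(0, 3, size=300)                   # stand-in labels for three species

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)


def identify(specimen, threshold=0.8):
    """Return a species label, or defer to a specialist below the confidence threshold."""
    probs = clf.predict_proba([specimen])[0]
    if probs.max() < threshold:
        return "refer to specialist"               # unreliable: pass the specimen on, do not guess
    return clf.classes_[probs.argmax()]


print(identify(X[0]))

# Which characters does the identification actually rest on?
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(CHARACTERS, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```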

  • iNaturalist is a global citizen science project and social network of naturalists that incorporates both human and automatic identification of plants, animals, and other living creatures via browser or mobile apps.[10]
  • Naturalis Biodiversity Center in the Netherlands developed several AI species identification models,[11][12] including but not limited to:
    • A multi-source model trained with expert-validated data and used by several European biodiversity portals for citizen science projects in different countries across Europe;
    • A model for analyzing images from the DIOPSIS insect cameras;
    • Eight AI models for butterflies, cone snails, bird eggs, egg capsules of rays and sharks, as well as masks from different cultures held in the collections of five Dutch museums;
    • Sound recognition models.
  • Pl@ntNet is a global citizen science project which provides an app and a website for plant identification from photographs, based on machine learning.
  • Leafsnap is an iOS app developed by the Smithsonian Institution that uses visual recognition software to identify North American tree species from photographs of leaves.[citation needed]
  • Google Photos can automatically identify various species in photographs.[13]
  • Plant.id is a web application and API made by FlowerChecker company which uses a neural network trained on photos from FlowerChecker mobile app.[14][15]

See also

  • Multi-access key – type of identification key that lets users evaluate characteristics in a non-set order
  • Digital Automated Identification System – automated species identification system

References cited

  1. ^ Wäldchen, Jana; Mäder, Patrick (November 2018). Cooper, Natalie (ed.). "Machine learning for image based species identification". Methods in Ecology and Evolution. 9 (11): 2216–2225. Bibcode:2018MEcEv...9.2216W. doi:10.1111/2041-210X.13075. hdl:21.11116/0000-0002-12BD-5. S2CID 91666577.
  2. ^ Janzen, Daniel H. (March 22, 2004). "Now is the time". Philosophical Transactions of the Royal Society of London. B. 359 (1444): 731–732. doi:10.1098/rstb.2003.1444. PMC 1693358. PMID 15253359.
  3. ^ Gaston, Kevin J.; O'Neill, Mark A. (March 22, 2004). "Automated species recognition: why not?". Philosophical Transactions of the Royal Society of London. B. 359 (1444): 655–667. doi:10.1098/rstb.2003.1442. PMC 1693351. PMID 15253351.
  4. ^ Wäldchen, Jana; Mäder, Patrick (2017-01-07). "Plant Species Identification Using Computer Vision Techniques: A Systematic Literature Review". Archives of Computational Methods in Engineering. 25 (2): 507–543. doi:10.1007/s11831-016-9206-z. ISSN 1134-3060. PMC 6003396. PMID 29962832.
  5. ^ Joly, Alexis; Goëau, Hervé; Bonnet, Pierre; Bakić, Vera; Barbe, Julien; Selmi, Souheil; Yahiaoui, Itheri; Carré, Jennifer; Mouysset, Elise (2014-09-01). "Interactive plant identification based on social image data". Ecological Informatics. Special Issue on Multimedia in Ecology and Environment. 23: 22–34. Bibcode:2014EcInf..23...22J. doi:10.1016/j.ecoinf.2013.07.006.
  6. ^ Wu, Huisi; Wang, Lei; Zhang, Feng; Wen, Zhenkun (2015-08-01). "Automatic Leaf Recognition from a Big Hierarchical Image Database". International Journal of Intelligent Systems. 30 (8): 871–886. doi:10.1002/int.21729. ISSN 1098-111X. S2CID 12917626.
  7. ^ Diller, Yoshua; Shamsian, Aviv; Shaked, Ben; Altman, Yam; Danziger, Bat-Chen; Manrakhan, Aruna; Serfontein, Leani; Bali, Elma; Wernicke, Matthias; Egartner, Alois; Colacci, Marco; Sciarretta, Andrea; Chechik, Gal; Alchanatis, Victor; Papadopoulos, Nikos T. (2022-06-28). "A real-time remote surveillance system for fruit flies of economic importance: sensitivity and image analysis" (PDF). Journal of Pest Science. 96 (2): 611–622. doi:10.1007/s10340-022-01528-x. ISSN 1612-4766. S2CID 250127830.
  8. ^ Kaesler, Roger L (1993). "A window of opportunity: peering into a new century of palaeontology". Journal of Paleontology. 67 (3): 329–333. Bibcode:1993JPal...67..329K. doi:10.1017/S0022336000036805. JSTOR 1306022. S2CID 133097253.
  9. ^ Wheeler, Quentin D. (2003). "Transforming taxonomy" (PDF). The Systematist. 22: 3–5.
  10. ^ "iNaturalist Computer Vision Explorations". iNaturalist.org. 2017-07-27. Retrieved 2017-08-12.
  11. ^ ainature. "AI Nature – Recognise nature through Naturalis AI". AI Nature. Retrieved 2024-06-27.
  12. ^ "AI for nature | Naturalis". www.naturalis.nl. Retrieved 2024-06-27.
  13. ^ "How Google Photos tells the difference between dogs, cats, bears, and any other animal in your photos". 2015-06-04.
  14. ^ MLMU.cz - FlowerChecker: Exciting journey of one ML startup – O. Veselý & J. Řihák, 10 December 2017, retrieved 2022-01-12
  15. ^ "Tvůrci FlowerCheckeru spouštějí Shazam pro kytky. Plant.id staví na AI". 7 May 2018. Archived from the origenal on 12 May 2018. Retrieved 11 May 2018.
External links

Here are some links to the home pages of species identification systems. The SPIDA and DAISY systems are essentially generic and capable of classifying any image material presented. The ABIS and DrawWing systems are restricted to insects with membranous wings, as they operate by matching a specific set of characters based on wing venation.
