RUO: Institutional Repository of the Universidad de Oviedo

Multilabel classifiers with a probabilistic thresholding strategy

Author(s):
Quevedo Pérez, José Ramón; Luaces Rodríguez, Óscar; Bahamonde Rionda, Antonio
Keyword(s):
Multilabel classification; Thresholding strategies; Expected loss

Publication date:
2012
Publisher:
Elsevier
Publisher's version:
http://dx.doi.org/10.1016/j.patcog.2011.08.007
Citation:
Pattern Recognition, 45(2), pp. 876-883 (2012); doi:10.1016/j.patcog.2011.08.007
Abstract:

In multilabel classification tasks the aim is to find hypotheses able to predict, for each instance, a set of classes or labels rather than a single one. Some state-of-the-art multilabel learners use a thresholding strategy, which consists in computing a score for each label and then predicting the set of labels whose score is higher than a given threshold. When this score is the estimated posterior probability, the selected threshold is typically 0.5. In this paper we introduce a family of thresholding strategies which take into account the posterior probability of all possible labels to determine a different threshold for each instance. Thus, we exploit some kind of interdependence among labels to compute this threshold, which is optimal regarding a given expected loss function. We found experimentally that these strategies outperform other thresholding options for multilabel classification. They provide an efficient method to implement a learner which considers the interdependence among labels in the sense that the overall performance of the prediction of a set of labels prevails over that of each single label.

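The following sketch is only a hedged illustration of the general idea in the abstract, not the paper's algorithm: for one instance, sort the estimated posterior probabilities of the labels, approximate the expected F1 of predicting the top-k labels under a label-independence assumption, and keep the cutoff k with the best expected value. The helper names `expected_f1` and `predict_labels` are hypothetical.

```python
# Illustrative sketch (not the authors' exact method): per-instance cutoff
# chosen to maximize an approximate expected F1, assuming label independence.
import numpy as np

def expected_f1(probs_sorted: np.ndarray, k: int) -> float:
    """Crude approximation of E[F1] when predicting the k most probable labels."""
    if k == 0:
        return 0.0
    expected_tp = probs_sorted[:k].sum()    # expected number of true positives
    expected_relevant = probs_sorted.sum()  # expected number of relevant labels
    return 2.0 * expected_tp / (k + expected_relevant)

def predict_labels(probs: np.ndarray) -> np.ndarray:
    """Return a 0/1 label vector using a per-instance probabilistic cutoff."""
    order = np.argsort(probs)[::-1]         # labels sorted by posterior probability
    best_k, best_score = 0, 0.0
    for k in range(len(probs) + 1):
        score = expected_f1(probs[order], k)
        if score > best_score:
            best_k, best_score = k, score
    prediction = np.zeros_like(probs, dtype=int)
    prediction[order[:best_k]] = 1          # predict the best_k most probable labels
    return prediction

# Example: prints [1 1 1 0]; the cutoff for this instance falls below 0.5.
print(predict_labels(np.array([0.9, 0.45, 0.40, 0.05])))
```

In this example the chosen cutoff admits labels with posterior probabilities 0.45 and 0.40, below the usual fixed threshold of 0.5, which is the kind of per-instance behaviour the abstract describes.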

URI:
http://hdl.handle.net/10651/6203
ISSN:
0031-3203
Local identifier:
20111125
DOI:
10.1016/j.patcog.2011.08.007
Collections
  • Artículos [37532]
  • Informática [872]
Files in this item
  • multilabel-pr.pdf (440.3 KB)
Unless otherwise indicated, the content of the Repository is protected under a Creative Commons license: Attribution-NonCommercial-NoDerivatives 4.0 International.