ontolearn.metrics

Quality metrics for concept learners.

Module Contents

Classes

  • Recall – Recall quality function.

  • Precision – Precision quality function.

  • F1 – F1-score quality function.

  • Accuracy – Accuracy quality function.

  • WeightedAccuracy – WeightedAccuracy quality function.

class ontolearn.metrics.Recall(*args, **kwargs)[source]

Bases: ontolearn.abstracts.AbstractScorer

Recall quality function.

Attribute:

name: name of the metric = ‘Recall’.

__slots__ = ()
name: Final = 'Recall'
score2(tp: int, fn: int, fp: int, tn: int) → Tuple[bool, float][source]

Quality score for a coverage count.

Parameters:
  • tp – True positive count.

  • fn – False negative count.

  • fp – False positive count.

  • tn – True negative count.

Returns:

A tuple whose first element indicates whether the metric could be applied and whose second element is the quality value, in the range 0.0–1.0.
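Example (a minimal sketch: the no-argument constructor is an assumption based on the (*args, **kwargs) signature, and the value in the comment uses the standard recall definition, tp / (tp + fn), rather than a statement about this implementation):

    from ontolearn.metrics import Recall

    # Hypothetical coverage counts for a candidate class expression.
    tp, fn, fp, tn = 8, 2, 3, 17

    recall = Recall()  # assumes no constructor arguments are required
    applicable, quality = recall.score2(tp=tp, fn=fn, fp=fp, tn=tn)

    # Standard recall would be tp / (tp + fn) = 8 / 10 = 0.8 for these counts.
    print(applicable, quality)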

class ontolearn.metrics.Precision(*args, **kwargs)[source]

Bases: ontolearn.abstracts.AbstractScorer

Precision quality function.

Attribute:

name: name of the metric = ‘Precision’.

__slots__ = ()
name: Final = 'Precision'
score2(tp: int, fn: int, fp: int, tn: int) → Tuple[bool, float][source]

Quality score for a coverage count.

Parameters:
  • tp – True positive count.

  • fn – False negative count.

  • fp – False positive count.

  • tn – True negative count.

Returns:

A tuple whose first element indicates whether the metric could be applied and whose second element is the quality value, in the range 0.0–1.0.
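Example (a minimal sketch under the same assumptions as above; the comment states the standard precision definition, tp / (tp + fp), not necessarily the exact implementation):

    from ontolearn.metrics import Precision

    precision = Precision()  # assumes no constructor arguments are required
    applicable, quality = precision.score2(tp=8, fn=2, fp=3, tn=17)

    # Standard precision would be tp / (tp + fp) = 8 / 11 ≈ 0.727 for these counts.
    print(applicable, quality)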

class ontolearn.metrics.F1(*args, **kwargs)[source]

Bases: ontolearn.abstracts.AbstractScorer

F1-score quality function.

Attribute:

name: name of the metric = ‘F1’.

__slots__ = ()
name: Final = 'F1'
score2(tp: int, fn: int, fp: int, tn: int) → Tuple[bool, float][source]

Quality score for a coverage count.

Parameters:
  • tp – True positive count.

  • fn – False negative count.

  • fp – False positive count.

  • tn – True negative count.

Returns:

A tuple whose first element indicates whether the metric could be applied and whose second element is the quality value, in the range 0.0–1.0.
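Example (a minimal sketch; the comment uses the standard F1 definition, the harmonic mean of precision and recall, which is an assumption about this implementation):

    from ontolearn.metrics import F1

    f1 = F1()  # assumes no constructor arguments are required
    applicable, quality = f1.score2(tp=8, fn=2, fp=3, tn=17)

    # Standard F1 = 2 * tp / (2 * tp + fp + fn) = 16 / 21 ≈ 0.762 for these counts.
    print(applicable, quality)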

class ontolearn.metrics.Accuracy(*args, **kwargs)[source]

Bases: ontolearn.abstracts.AbstractScorer

Accuracy quality function. Accuracy is acc = (tp + tn) / (tp + tn + fp + fn). However, concept learning papers (e.g. “Learning OWL Class Expressions”) tend to define their own accuracy measures.

In OCEL, the accuracy of a concept C is defined as 1 - (|E^+ \ R(C)| + |E^- ∩ R(C)|) / |E|.

In CELOE, the accuracy of a concept C is defined as 1 - (|R(A) \ R(C)| + |R(C) \ R(A)|) / n.

  1. R(·) is the retrieval function, A is the class to describe, and C is the candidate concept (as used in CELOE).

  2. E^+ and E^- are the provided positive and negative examples, with E = E^+ ∪ E^-.
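To make the two definitions concrete, the sketch below evaluates them on small hypothetical retrieval sets; the variable names (E_pos, E_neg, R_A, R_C) are illustrative only and do not correspond to an ontolearn API:

    # Hypothetical positive/negative examples and retrieval results.
    E_pos = {"a", "b", "c"}      # E^+
    E_neg = {"d", "e"}           # E^-
    R_C = {"a", "b", "d"}        # R(C): individuals retrieved by the candidate concept C
    R_A = E_pos                  # R(A): individuals of the class A to describe
    E = E_pos | E_neg
    n = len(E)

    # OCEL-style accuracy: 1 - (|E^+ \ R(C)| + |E^- ∩ R(C)|) / |E|
    ocel_acc = 1 - (len(E_pos - R_C) + len(E_neg & R_C)) / len(E)

    # CELOE-style accuracy: 1 - (|R(A) \ R(C)| + |R(C) \ R(A)|) / n
    celoe_acc = 1 - (len(R_A - R_C) + len(R_C - R_A)) / n

    print(ocel_acc, celoe_acc)   # 0.6 and 0.6 for these sets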

Attribute:

name: name of the metric = ‘Accuracy’.

__slots__ = ()
name: Final = 'Accuracy'
score2(tp: int, fn: int, fp: int, tn: int) → Tuple[bool, float][source]

Quality score for a coverage count.

Parameters:
  • tp – True positive count.

  • fn – False negative count.

  • fp – False positive count.

  • tn – True negative count.

Returns:

A tuple whose first element indicates whether the metric could be applied and whose second element is the quality value, in the range 0.0–1.0.
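Example (a minimal sketch; whether score2 implements the standard, OCEL, or CELOE variant is not stated here, so the value in the comment uses the standard definition only as a point of reference):

    from ontolearn.metrics import Accuracy

    accuracy = Accuracy()  # assumes no constructor arguments are required
    applicable, quality = accuracy.score2(tp=8, fn=2, fp=3, tn=17)

    # Standard accuracy would be (tp + tn) / (tp + tn + fp + fn) = 25 / 30 ≈ 0.833.
    print(applicable, quality)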

class ontolearn.metrics.WeightedAccuracy(*args, **kwargs)[source]

Bases: ontolearn.abstracts.AbstractScorer

WeightedAccuracy quality function.

Attribute:

name: name of the metric = ‘WeightedAccuracy’.

__slots__ = ()
name: Final = 'WeightedAccuracy'
score2(tp: int, fn: int, fp: int, tn: int) → Tuple[bool, float][source]

Quality score for a coverage count.

Parameters:
  • tp – True positive count.

  • fn – False negative count.

  • fp – False positive count.

  • tn – True negative count.

Returns:

A tuple whose first element indicates whether the metric could be applied and whose second element is the quality value, in the range 0.0–1.0.
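Example (a minimal sketch; the docstring does not state which weighting is used, so the balanced-accuracy formula in the comment, the mean of the positive and negative recalls, is only an assumption):

    from ontolearn.metrics import WeightedAccuracy

    wacc = WeightedAccuracy()  # assumes no constructor arguments are required
    applicable, quality = wacc.score2(tp=8, fn=2, fp=3, tn=17)

    # Balanced accuracy would be (tp/(tp+fn) + tn/(tn+fp)) / 2 = (0.8 + 0.85) / 2 = 0.825,
    # but the exact formula used by WeightedAccuracy may differ.
    print(applicable, quality)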