ontolearn.learners
Submodules
Package Contents
Classes
Drill: Neuro-Symbolic Class Expression Learning (https://www.ijcai.org/proceedings/2023/0403.pdf)
TDL: Tree-based Description Logic Concept Learner
- class ontolearn.learners.Drill(knowledge_base, path_embeddings: str = None, refinement_operator: LengthBasedRefinement = None, use_inverse=True, use_data_properties=True, use_card_restrictions=True, card_limit=3, use_nominals=True, quality_func: Callable = None, reward_func: object = None, batch_size=None, num_workers: int = 1, iter_bound=None, max_num_of_concepts_tested=None, verbose: int = 0, terminate_on_goal=None, max_len_replay_memory=256, epsilon_decay: float = 0.01, epsilon_min: float = 0.0, num_epochs_per_replay: int = 100, num_episodes_per_replay: int = 2, learning_rate: float = 0.001, max_runtime=None, num_of_sequential_actions=1, stop_at_goal=True, num_episode=10)[source]
Bases:
ontolearn.base_concept_learner.RefinementBasedConceptLearner
Neuro-Symbolic Class Expression Learning (https://www.ijcai.org/proceedings/2023/0403.pdf)
- initialize_training_class_expression_learning_problem(pos: FrozenSet[owlapy.owl_individual.OWLNamedIndividual], neg: FrozenSet[owlapy.owl_individual.OWLNamedIndividual]) RL_State [source]
Initialize the class expression learning problem from the given positive and negative examples and return the initial RL state.
- rl_learning_loop(num_episode: int, pos_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual], neg_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual]) List[float] [source]
Reinforcement Learning training loop.
1. Initialize the RL environment for a given learning problem (E⁺ = pos_uri and E⁻ = neg_uri).
2. Training:
2.1 Obtain a trajectory: a sequence of RL states/DL concepts, e.g. ⊤, Person, (Female ⊓ ∀ hasSibling.Female).
Rewards at each transition are also computed.
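The loop above can be sketched in plain Python. Here `refine`, `reward`, and the ε-greedy selection are hypothetical stand-ins for Drill's refinement operator, reward function, and Q-network-based policy:

```python
import random

def rl_learning_loop_sketch(root, refine, reward, num_episode=3,
                            num_actions=2, epsilon=0.9, seed=0):
    """Minimal sketch of Drill's RL loop: per episode, walk a trajectory of
    states by refining the current state and picking a successor epsilon-greedily,
    accumulating the reward of each transition."""
    rng = random.Random(seed)
    sum_rewards = []
    for _ in range(num_episode):
        state, total = root, 0.0
        for _ in range(num_actions):
            candidates = refine(state)
            if not candidates:
                break
            # explore with probability epsilon, otherwise exploit
            if rng.random() < epsilon:
                nxt = rng.choice(candidates)
            else:
                nxt = max(candidates, key=reward)
            total += reward(nxt)
            state = nxt
        sum_rewards.append(total)
    return sum_rewards
```

With `epsilon=0.0` the loop degenerates to greedy search, which is useful for checking the exploitation path in isolation.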
- train(dataset: Iterable[Tuple[str, Set, Set]] | None = None, num_of_target_concepts: int = 3, num_learning_problems: int = 3)[source]
Train the RL agent: (1) generate learning problems; (2) for each learning problem, run the RL training loop.
- fit(learning_problem: PosNegLPStandard, max_runtime=None)[source]
Run the concept learning algorithm according to its configuration.
Once finished, the results can be queried with the best_hypotheses function.
- fit_from_iterable(dataset: List[Tuple[object, Set[owlapy.owl_individual.OWLNamedIndividual], Set[owlapy.owl_individual.OWLNamedIndividual]]], max_runtime: int = None) List [source]
The dataset is a list of tuples whose first item is either a string or an OWL class expression indicating the target concept, followed by the sets of positive and negative examples.
- init_embeddings_of_examples(pos_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual], neg_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual])[source]
- create_rl_state(c: owlapy.class_expression.OWLClassExpression, parent_node: RL_State | None = None, is_root: bool = False) RL_State [source]
Create an RL_State instance.
- compute_quality_of_class_expression(state: RL_State) None [source]
Compute the quality of an OWL class expression:
(1) Perform concept retrieval.
(2) Compute the quality w.r.t. (1) and the positive and negative examples.
(3) Increment the number-of-tested-concepts attribute.
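The three steps can be illustrated with F1 as the quality function; `retrieved` stands for the result of concept retrieval against the knowledge base (the function name and signature here are illustrative, not ontolearn's API):

```python
def f1_quality(retrieved: set, pos: set, neg: set) -> float:
    """Score a class expression by F1 over its retrieved individuals:
    tp = retrieved positives, fp = retrieved negatives, fn = missed positives."""
    tp = len(retrieved & pos)
    fp = len(retrieved & neg)
    fn = len(pos - retrieved)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```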
- sequence_of_actions(root_rl_state: RL_State) Tuple[List[Tuple[RL_State, RL_State]], List[SupportsFloat]] [source]
Perform a sequence of actions in an RL environment whose root state is ⊤.
- form_experiences(state_pairs: List, rewards: List) None [source]
Form experiences from a sequence of concepts and corresponding rewards.
- Parameters:
state_pairs – A list of tuples containing two consecutive states.
rewards – A list of rewards; gamma is 1.
- Returns:
X – a list of embeddings of the current concept, next concept, positive examples and negative examples; y – the max Q value.
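Since gamma is 1, the target for each transition is the undiscounted sum of rewards from that step onward. A minimal sketch over plain state pairs (the embedding lookup is omitted; names are illustrative):

```python
def form_experiences_sketch(state_pairs, rewards):
    """Pair each (current, next) transition with its return.
    With gamma = 1 the target Q value of step i is the plain sum
    of all rewards from step i to the end of the trajectory."""
    assert len(state_pairs) == len(rewards)
    experiences = []
    future = 0.0
    # walk the trajectory backwards so each step sees the rewards after it
    for (s, s_next), r in zip(reversed(state_pairs), reversed(rewards)):
        future += r
        experiences.append((s, s_next, future))
    experiences.reverse()
    return experiences
```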
- update_search(concepts, predicted_Q_values=None)[source]
Update the search tree with the given concepts and their predicted Q values.
- assign_embeddings(rl_state: RL_State) None [source]
Assign embeddings to an RL state. An RL state is represented by the vector representations of all individuals belonging to the respective OWLClassExpression.
- exploration_exploitation_tradeoff(current_state: AbstractNode, next_states: List[AbstractNode]) AbstractNode [source]
Exploration vs. exploitation trade-off when selecting the next state: either (1) explore (pick a random next state) or (2) exploit (pick the next state with the highest predicted Q value).
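A minimal ε-greedy sketch matching the constructor's `epsilon_decay` and `epsilon_min` parameters; the Q-network is replaced here by a plain scoring function, so this is an illustration of the trade-off rather than Drill's implementation:

```python
import random

class EpsilonGreedy:
    """Pick a random next state with probability epsilon, otherwise the one
    with the highest predicted value; decay epsilon after every decision."""

    def __init__(self, epsilon=1.0, epsilon_decay=0.01, epsilon_min=0.0, seed=0):
        self.epsilon = epsilon
        self.epsilon_decay = epsilon_decay
        self.epsilon_min = epsilon_min
        self.rng = random.Random(seed)

    def choose(self, next_states, value):
        if self.rng.random() < self.epsilon:
            choice = self.rng.choice(next_states)   # exploration
        else:
            choice = max(next_states, key=value)    # exploitation
        # anneal epsilon toward epsilon_min
        self.epsilon = max(self.epsilon_min, self.epsilon - self.epsilon_decay)
        return choice
```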
- exploitation(current_state: AbstractNode, next_states: List[AbstractNode]) RL_State [source]
Find the next node with the highest predicted Q value:
(1) Predict Q values: predictions.shape => torch.Size([n, 1]), where n = len(next_states).
(2) Find the index of the maximum value in the predictions.
(3) Use that index to obtain the next state.
(4) Return the next state.
- predict_values(current_state: RL_State, next_states: List[RL_State]) torch.Tensor [source]
Predict promise of next states given current state.
- Returns:
Predicted Q values.
- generate_learning_problems(dataset: Iterable[Tuple[str, Set, Set]] | None = None, num_of_target_concepts: int = 3, num_learning_problems: int = 5) Iterable[Tuple[str, Set, Set]] [source]
Generate learning problems if none are provided.
Time complexity: O(n²), where n is the number of named concepts.
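The quadratic cost comes from pairing named concepts. A sketch of turning concept pairs into (target, E⁺, E⁻) problems, assuming a hypothetical `instances` mapping from each concept name to its set of individuals:

```python
from itertools import permutations

def generate_problems_sketch(instances: dict, num_problems: int = 3):
    """For every ordered pair of named concepts (C, D) with C != D, use C's
    individuals as positives and D's remaining individuals as negatives.
    Enumerating all ordered pairs is O(n^2) in the number of named concepts."""
    problems = []
    for c, d in permutations(sorted(instances), 2):
        pos = instances[c]
        neg = instances[d] - instances[c]
        if pos and neg:
            problems.append((c, pos, neg))
        if len(problems) == num_problems:
            break
    return problems
```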
- learn_from_illustration(sequence_of_goal_path: List[RL_State])[source]
- Parameters:
sequence_of_goal_path – ⊤, Parent, Parent ⊓ Daughter.
- best_hypotheses(n=1, return_node: bool = False) owlapy.class_expression.OWLClassExpression | List[owlapy.class_expression.OWLClassExpression] [source]
Get the current best found hypotheses according to the quality.
- Parameters:
n – Maximum number of results.
- Returns:
The best hypotheses as OWL class expressions (or as search tree nodes when return_node is True).
- next_node_to_expand() RL_State [source]
Return a node that maximizes the heuristic function at time t.
- downward_refinement(*args, **kwargs)[source]
Execute one refinement step of a refinement based learning algorithm.
- Parameters:
node (_N) – the search tree node on which to refine.
- Returns:
Refinement results as new search tree nodes (they still need to be added to the tree).
- Return type:
Iterable[_N]
- show_search_tree(heading_step: str, top_n: int = 10) None [source]
A debugging function to print out the current search tree and the current n best found hypotheses to standard output.
- Parameters:
heading_step – A message to display at the beginning of the output.
top_n – The number of current best hypotheses to print out.
- class ontolearn.learners.TDL(knowledge_base, use_inverse: bool = False, use_data_properties: bool = False, use_nominals: bool = False, use_card_restrictions: bool = False, quality_func: Callable = None, kwargs_classifier: dict = None, max_runtime: int = 1, grid_search_over: dict = None, grid_search_apply: bool = False, report_classification: bool = False, plot_tree: bool = False, plot_embeddings: bool = False)[source]
Tree-based Description Logic Concept Learner
- create_training_data(learning_problem: PosNegLPStandard) Tuple[pandas.DataFrame, pandas.Series] [source]
Create training data (X, y) for a binary classification problem, where X is a sparse binary matrix and y is a binary vector.
X has shape (n, d) and y has shape (n, 1), where n denotes the number of examples and d the number of features extracted from those examples.
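A sketch of the binary encoding, assuming each example comes with a set of already-extracted features (TDL derives these from the examples' neighbourhoods; the function and its dict-based inputs are illustrative, not ontolearn's API):

```python
def encode_examples(pos_features: dict, neg_features: dict):
    """Build (X, y): X[i][j] = 1 iff example i exhibits feature j,
    y[i] = 1 for positives and 0 for negatives."""
    examples = {**pos_features, **neg_features}
    features = sorted({f for fs in examples.values() for f in fs})
    X = [[1 if f in fs else 0 for f in features] for fs in examples.values()]
    y = [1 if e in pos_features else 0 for e in examples]
    return X, y, features
```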
- construct_owl_expression_from_tree(X: pandas.DataFrame, y: pandas.DataFrame) List[owlapy.class_expression.OWLObjectIntersectionOf] [source]
Construct an OWL class expression from a decision tree.
- fit(learning_problem: PosNegLPStandard = None, max_runtime: int = None)[source]
Fit the learner to the given learning problem:
(1) Extract multi-hop information about E⁺ and E⁻, denoted by F.
(1.1) E = list of (E⁺ ⊔ E⁻).
(2) Build the training data X ∈ ℝ^{|E| × |F|}.
(3) Create the binary labels y.
(4) Construct a set of DL concepts for each e ∈ E⁺ and take their union.
- Parameters:
learning_problem – The learning problem.
max_runtime – Total runtime of the learning.
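Step (4) can be pictured as reading conjunctions off the paths of the fitted decision tree. A self-contained sketch over a toy tree representation (nested tuples, not sklearn's tree objects), where the right branch of a split means the feature holds:

```python
def tree_to_conjunctions(node, path=()):
    """Collect, for every leaf predicting the positive class, the conjunction
    of feature tests on the root-to-leaf path. A node is either
    ("leaf", label) or ("split", feature, left_subtree, right_subtree)."""
    if node[0] == "leaf":
        # keep only positive leaves with a non-empty path of tests
        return [" and ".join(path)] if node[1] == 1 and path else []
    _, feature, left, right = node
    return (tree_to_conjunctions(left, path + ("not " + feature,))
            + tree_to_conjunctions(right, path + (feature,)))
```

The union over all positive paths then corresponds to the disjunction of the returned conjunctions.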