:py:mod:`ontolearn.learners`
============================

.. py:module:: ontolearn.learners


Submodules
----------
.. toctree::
   :titlesonly:
   :maxdepth: 1

   drill/index.rst
   nero/index.rst
   tree_learner/index.rst


Package Contents
----------------

Classes
~~~~~~~

.. autoapisummary::

   ontolearn.learners.Drill
   ontolearn.learners.TDL


.. py:class:: Drill(knowledge_base, path_embeddings: str = None, refinement_operator: ontolearn.refinement_operators.LengthBasedRefinement = None, use_inverse=True, use_data_properties=True, use_card_restrictions=True, use_nominals=True, quality_func: Callable = None, reward_func: object = None, batch_size=None, num_workers: int = 1, iter_bound=None, max_num_of_concepts_tested=None, verbose: int = 1, terminate_on_goal=None, max_len_replay_memory=256, epsilon_decay: float = 0.01, epsilon_min: float = 0.0, num_epochs_per_replay: int = 2, num_episodes_per_replay: int = 2, learning_rate: float = 0.001, max_runtime=None, num_of_sequential_actions=3, stop_at_goal=True, num_episode=10)


   Bases: :py:obj:`ontolearn.base_concept_learner.RefinementBasedConceptLearner`

   Neuro-Symbolic Class Expression Learning (https://www.ijcai.org/proceedings/2023/0403.pdf).

   .. py:method:: initialize_training_class_expression_learning_problem(pos: FrozenSet[owlapy.owl_individual.OWLNamedIndividual], neg: FrozenSet[owlapy.owl_individual.OWLNamedIndividual]) -> ontolearn.search.RL_State

      Initialize a training learning problem, given by positive and negative examples, as an RL state.

   .. py:method:: rl_learning_loop(num_episode: int, pos_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual], neg_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual]) -> List[float]

      Reinforcement learning training loop.

      (1) Initialize the RL environment for a given learning problem (E^+ ``pos_uri`` and E^- ``neg_uri``).

      (2) Training:

          (2.1) Obtain a trajectory: a sequence of RL states/DL concepts, e.g. ⊤, Person, (Female ⊓ ∀ hasSibling.Female). Rewards at each transition are also computed.

   .. py:method:: train(dataset: Optional[Iterable[Tuple[str, Set, Set]]] = None, num_of_target_concepts: int = 1, num_learning_problems: int = 1)

      Train the RL agent:

      (1) Generate learning problems.

      (2) For each learning problem, perform the RL loop.

   .. py:method:: save(directory: str) -> None

      Save the weights of the deep Q-network.

   .. py:method:: load(directory: str = None) -> None

      Load the weights of the deep Q-network.

   .. py:method:: fit(learning_problem: ontolearn.learning_problem.PosNegLPStandard, max_runtime=None)

      Run the concept learning algorithm according to its configuration.

      Once finished, the results can be queried with the `best_hypotheses` function.

   .. py:method:: fit_from_iterable(dataset: List[Tuple[object, Set[owlapy.owl_individual.OWLNamedIndividual], Set[owlapy.owl_individual.OWLNamedIndividual]]], max_runtime: int = None) -> List

      Fit on a dataset given as a list of tuples whose first item is either a string or an OWL class expression indicating the target concept.

   .. py:method:: init_embeddings_of_examples(pos_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual], neg_uri: FrozenSet[owlapy.owl_individual.OWLNamedIndividual])

   .. py:method:: create_rl_state(c: owlapy.class_expression.OWLClassExpression, parent_node: Optional[ontolearn.search.RL_State] = None, is_root: bool = False) -> ontolearn.search.RL_State

      Create an RL_State instance.

   .. py:method:: compute_quality_of_class_expression(state: ontolearn.search.RL_State) -> None

      Compute the quality of an OWL class expression:

      (1) Perform concept retrieval.

      (2) Compute the quality w.r.t. (1) and the positive and negative examples.

      (3) Increment the number-of-tested-concepts attribute.
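   The ``fit``/``best_hypotheses`` workflow described above can be sketched as follows. This is a minimal, illustrative example, not part of the API reference: the ontology path, embedding path, and individual IRIs are placeholders, and import paths may differ slightly across owlapy versions.

   .. code-block:: python

      from ontolearn.knowledge_base import KnowledgeBase
      from ontolearn.learners import Drill
      from ontolearn.learning_problem import PosNegLPStandard
      from owlapy.iri import IRI
      from owlapy.owl_individual import OWLNamedIndividual

      # Placeholder paths: point these at your own ontology and pretrained embeddings.
      kb = KnowledgeBase(path="KGs/Family/family.owl")
      drill = Drill(knowledge_base=kb,
                    path_embeddings="embeddings/family_embeddings.csv",
                    max_runtime=30)

      # Positive/negative examples are frozensets of OWLNamedIndividual (IRIs are placeholders).
      pos = frozenset({OWLNamedIndividual(IRI.create("http://example.com/family#ind1"))})
      neg = frozenset({OWLNamedIndividual(IRI.create("http://example.com/family#ind2"))})

      drill.fit(PosNegLPStandard(pos=pos, neg=neg))
      hypothesis = drill.best_hypotheses(n=1)  # an owlapy OWLClassExpression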
   .. py:method:: apply_refinement(rl_state: ontolearn.search.RL_State) -> Generator

      Apply downward refinements.

   .. py:method:: select_next_state(current_state, next_rl_states) -> Tuple[ontolearn.search.RL_State, float]

   .. py:method:: sequence_of_actions(root_rl_state: ontolearn.search.RL_State) -> Tuple[List[Tuple[ontolearn.search.RL_State, ontolearn.search.RL_State]], List[SupportsFloat]]

      Perform a sequence of actions in an RL environment whose root state is ⊤.

   .. py:method:: form_experiences(state_pairs: List, rewards: List) -> None

      Form experiences from a sequence of concepts and corresponding rewards.

      :param state_pairs: A list of tuples containing two consecutive states.
      :param rewards: A list of rewards. Gamma is 1.

      :returns:
         X - A list of embeddings of the current concept, the next concept, the positive examples, and the negative examples.
         y - Argmax Q value.

   .. py:method:: learn_from_replay_memory() -> None

      Learn by replaying memory.

   .. py:method:: update_search(concepts, predicted_Q_values=None)

      :param concepts:
      :param predicted_Q_values:
      :returns:

   .. py:method:: get_embeddings_individuals(individuals: List[str]) -> torch.FloatTensor

   .. py:method:: get_individuals(rl_state: ontolearn.search.RL_State) -> List[str]

   .. py:method:: get_embeddings(instances) -> None

   .. py:method:: assign_embeddings(rl_state: ontolearn.search.RL_State) -> None

      Assign embeddings to an RL state. An RL state is represented by the vector representations of all individuals belonging to the respective OWLClassExpression.

   .. py:method:: save_weights(path: str = None) -> None

      Save the weights of the deep Q-network.

   .. py:method:: exploration_exploitation_tradeoff(current_state: ontolearn.abstracts.AbstractNode, next_states: List[ontolearn.abstracts.AbstractNode]) -> ontolearn.abstracts.AbstractNode

      Exploration vs. exploitation trade-off when selecting the next state:

      (1) Exploration.

      (2) Exploitation.

   .. py:method:: exploitation(current_state: ontolearn.abstracts.AbstractNode, next_states: List[ontolearn.abstracts.AbstractNode]) -> ontolearn.search.RL_State

      Find the next node that is assigned the highest predicted Q value:

      (1) Predict Q values: predictions.shape => torch.Size([n, 1]), where n = len(next_states).

      (2) Find the index of the maximum value in the predictions.

      (3) Use the index to obtain the next state.

      (4) Return the next state.

   .. py:method:: predict_values(current_state: ontolearn.search.RL_State, next_states: List[ontolearn.search.RL_State]) -> torch.Tensor

      Predict the promise of next states given the current state.

      :returns: Predicted Q values.

   .. py:method:: retrieve_concept_chain(rl_state: ontolearn.search.RL_State) -> List[ontolearn.search.RL_State]
      :staticmethod:

   .. py:method:: generate_learning_problems(num_of_target_concepts, num_learning_problems) -> List[Tuple[str, Set, Set]]

      Generate learning problems if none are provided.

      Time complexity: O(n^2), where n is the number of named concepts.

   .. py:method:: learn_from_illustration(sequence_of_goal_path: List[ontolearn.search.RL_State])

      :param sequence_of_goal_path: ⊤, Parent, Parent ⊓ Daughter.

   .. py:method:: best_hypotheses(n=1, return_node: bool = False) -> Union[owlapy.class_expression.OWLClassExpression, List[owlapy.class_expression.OWLClassExpression]]

      Get the current best hypotheses found, according to their quality.

      :param n: Maximum number of results.

      :returns: Iterable with hypotheses in the form of search tree nodes.

   .. py:method:: clean()

      Clear all states of the concept learner.

   .. py:method:: next_node_to_expand() -> ontolearn.search.RL_State

      Return a node that maximizes the heuristic function at time t.
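   The exploration-exploitation trade-off described above follows the usual epsilon-greedy scheme. The following standalone sketch is not Drill's actual implementation; ``predict_values`` and ``epsilon`` are passed in here purely for illustration.

   .. code-block:: python

      import random
      import torch

      def epsilon_greedy_next_state(current_state, next_states, epsilon, predict_values):
          """Pick the next RL state: explore with probability epsilon, otherwise
          exploit by choosing the state with the highest predicted Q value."""
          if random.random() < epsilon:
              # (1) Exploration: pick a random successor state.
              return random.choice(next_states)
          # (2) Exploitation: predictions.shape == torch.Size([n, 1]), n = len(next_states).
          predictions = predict_values(current_state, next_states)
          return next_states[int(torch.argmax(predictions))]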
   .. py:method:: downward_refinement(*args, **kwargs)

      Execute one refinement step of a refinement-based learning algorithm.

      :param node: The search tree node on which to refine.
      :type node: _N

      :returns: Refinement results as new search tree nodes (they still need to be added to the tree).
      :rtype: Iterable[_N]

   .. py:method:: show_search_tree(heading_step: str, top_n: int = 10) -> None

      A debugging function that prints the current search tree and the current n best found hypotheses to standard output.

      :param heading_step: A message to display at the beginning of the output.
      :param top_n: The number of current best hypotheses to print out.

   .. py:method:: terminate_training()


.. py:class:: TDL(knowledge_base, use_inverse: bool = False, use_data_properties: bool = False, use_nominals: bool = False, use_card_restrictions: bool = False, quality_func: Callable = None, kwargs_classifier: dict = None, max_runtime: int = 1, grid_search_over: dict = None, grid_search_apply: bool = False, report_classification: bool = False, plot_tree: bool = False, plot_embeddings: bool = False, verbose: int = 1)


   Tree-based Description Logic Concept Learner.

   .. py:method:: create_training_data(learning_problem: ontolearn.learning_problem.PosNegLPStandard) -> Tuple[pandas.DataFrame, pandas.Series]

      Create training data (X: pandas.DataFrame of shape (n, d), y: pandas.Series of length n) for a binary classification problem, where n denotes the number of examples and d denotes the number of features extracted from the n examples.

      :returns: X, y.

   .. py:method:: construct_owl_expression_from_tree(X: pandas.DataFrame, y: pandas.DataFrame) -> List[owlapy.class_expression.OWLObjectIntersectionOf]

      Construct an OWL class expression from a decision tree.

   .. py:method:: fit(learning_problem: ontolearn.learning_problem.PosNegLPStandard = None, max_runtime: int = None)

      Fit the learner to the given learning problem:

      (1) Extract multi-hop information about E^+ and E^-, denoted by \mathcal{F}.

          (1.1) E = list of (E^+ \sqcup E^-).

      (2) Build training data \mathbf{X} \in \mathbb{R}^{|E| \times |\mathcal{F}|}.

      (3) Create binary labels \mathbf{y}.

      (4) Construct a set of DL concepts for each e \in E^+.

      (5) Union the concepts from (4).

      :param learning_problem: The learning problem.
      :param max_runtime: Total runtime of the learning.

   .. py:method:: best_hypotheses(n=1) -> Tuple[owlapy.class_expression.OWLClassExpression, List[owlapy.class_expression.OWLClassExpression]]

      Return the prediction.

   .. py:method:: predict(X: List[owlapy.owl_individual.OWLNamedIndividual], proba=True) -> numpy.ndarray
      :abstractmethod:

      Predict the likelihoods of individuals belonging to the classes.
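   A minimal usage sketch for ``TDL``, analogous to the ``Drill`` example above. It is illustrative only: the ontology path and individual IRIs are placeholders, and import paths may differ slightly across owlapy versions.

   .. code-block:: python

      from ontolearn.knowledge_base import KnowledgeBase
      from ontolearn.learners import TDL
      from ontolearn.learning_problem import PosNegLPStandard
      from owlapy.iri import IRI
      from owlapy.owl_individual import OWLNamedIndividual

      kb = KnowledgeBase(path="KGs/Family/family.owl")  # placeholder path
      tdl = TDL(knowledge_base=kb, report_classification=True, max_runtime=10)

      # Placeholder IRIs for the positive and negative examples.
      pos = frozenset({OWLNamedIndividual(IRI.create("http://example.com/family#ind1"))})
      neg = frozenset({OWLNamedIndividual(IRI.create("http://example.com/family#ind2"))})

      tdl.fit(PosNegLPStandard(pos=pos, neg=neg))
      prediction = tdl.best_hypotheses(n=1)  # learned OWL class expression(s)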