Weight Learning in a Probabilistic Extension of Answer Set Programs
Abstract
LPMLN is a probabilistic extension of answer set programs with the weight scheme derived from that of Markov Logic. Previous work has shown how inference in LPMLN can be achieved. In this paper, we present the concept of weight learning in LPMLN and learning algorithms for LPMLN derived from those for Markov Logic. We also present a prototype implementation that uses answer set solvers for learning, as well as some example domains that illustrate distinct features of LPMLN learning. Learning in LPMLN is in accordance with the stable model semantics, so it learns parameters for probabilistic extensions of knowledge-rich domains where answer set programming has been shown to be useful but limited to the deterministic case, such as reachability analysis and reasoning about actions in dynamic domains. We also apply the method to learn the parameters for probabilistic abductive reasoning about actions.
1 Introduction
LPMLN is a probabilistic extension of answer set programs with the weight scheme derived from that of Markov Logic [Richardson and Domingos 2006]. The language turns out to be expressive enough to embed several other probabilistic logic languages, such as P-log [Baral, Gelfond, and Rushton 2009], ProbLog [De Raedt, Kimmig, and Toivonen 2007], Markov Logic, and Causal Models [Pearl 2000], as described in [Lee, Meng, and Wang 2015; Lee and Wang 2016; Balai and Gelfond 2016; Lee and Yang 2017]. Inference engines for LPMLN, such as lpmln2asp, lpmln2mln [Lee, Talsania, and Wang 2017], and lpmln-models [Wang and Zhang 2017], have been developed based on the reduction of LPMLN to answer set programs and Markov Logic.
The weight associated with each LPMLN rule roughly asserts how important the rule is in deriving a stable model. It can be manually specified by the user, which may be acceptable for a simple program, but a systematic assignment of weights for a complex program can be challenging. A solution is to learn the weights automatically from observed data.
With this goal in mind, this paper presents the concept of weight learning in LPMLN and a few learning methods for LPMLN derived from weight learning in Markov Logic. Weight learning in LPMLN is to find the weights of the rules in an LPMLN program such that the likelihood of the observed data according to the LPMLN semantics is maximized, which is commonly known as Maximum Likelihood Estimation (MLE) in machine learning.
In LPMLN, due to the requirement of a stable model, deterministic dependencies are frequent. Poon and Domingos (2006) noted that deterministic dependencies break the support of a probability distribution into disconnected regions, making it difficult to design ergodic Markov chains for Markov Chain Monte Carlo (MCMC) sampling, which motivated them to develop an algorithm called MC-SAT that uses a satisfiability solver to find modes for computing conditional probabilities. Thanks to the close relationship between Markov Logic and LPMLN, we could adapt that algorithm to LPMLN, which we call MC-ASP. Unlike MC-SAT, algorithm MC-ASP utilizes ASP solvers for performing MCMC sampling, and is based on the penalty-based formulation of LPMLN instead of the reward-based formulation as in Markov Logic.
Learning in LPMLN is in accordance with the stable model semantics, so it learns parameters for probabilistic extensions of knowledge-rich domains where answer set programming has been shown to be useful but limited to the deterministic case, such as reachability analysis and reasoning about actions in dynamic domains. More interestingly, we demonstrate that the method can also be applied to learn parameters for abductive reasoning about dynamic systems, associating a probability learned from data with each possible reason for a failure.
The paper is organized as follows. Section 2 reviews the language LPMLN, and Section 3 presents the learning framework and a gradient ascent method for the basic case, where a single stable model is given as the training data. Section 4 presents a few extensions of the learning problem and methods, such as allowing multiple stable models as the training data and allowing the training data to be an incomplete interpretation. In addition to the general learning algorithm, Section 5 relates LPMLN learning to learning in ProbLog and Markov Logic as special cases, which allows those special cases of LPMLN learning to be computed by existing implementations of ProbLog and Markov Logic. Section 6 introduces a prototype implementation of the general learning algorithm and demonstrates it on a few example domains where LPMLN learning is more suitable than other learning methods.
2 Review: Language LPMLN
The original definition of LPMLN from [Lee and Wang 2016] is based on the concept of a "reward": the more rules are true, the larger the weight assigned to the corresponding stable model as the reward. Alternatively, Lee and Yang (2017) present a reformulation in terms of a "penalty": the more rules are false, the smaller the weight assigned to the corresponding stable model. The advantage of the latter is that it yields a translation of LPMLN programs that can be readily accepted by ASP solvers, the idea that led to the implementation of LPMLN using ASP solvers [Lee, Talsania, and Wang 2017]. Throughout the paper, we refer to this reformulation as the main definition of LPMLN.
We assume a first-order signature σ that contains no function constants of positive arity, which yields finitely many Herbrand interpretations. An LPMLN program Π is a pair ⟨R, W⟩, where R is a list of rules R_1, …, R_n, and each rule R_i has the form

(1)   A ← B, N

where A is a disjunction of atoms, B is a conjunction of atoms, and N is a negative formula constructed from atoms using conjunction, disjunction, and negation. (For the definition of a negative formula, see [Ferraris, Lee, and Lifschitz 2011].) We identify rule (1) with the formula B ∧ N → A. The expression {A}ᶜʰ ← B, where A is an atom, denotes the rule A ← B, not not A. W is a list w_1, …, w_n such that each w_i is a real number or the symbol α, and denotes the weight of rule R_i in R. We can also identify an LPMLN program with the finite list of weighted rules w_1 : R_1, …, w_n : R_n. A weighted rule w_i : R_i is called soft if w_i is a real number; it is called hard if w_i is α (which denotes the infinite weight). Variables range over the Herbrand Universe, which is assumed to be finite so that the ground program is finite. For any LPMLN program Π, by gr(Π) we denote the program obtained from Π by the process of grounding. Each resulting rule with no variables, which we call a ground instance, receives the same weight as the original rule.
For any LPMLN program Π and any interpretation I, the expression n_i(I) denotes the number of ground instances of R_i that are false in I, and Π̄ denotes the set of (unweighted) formulas obtained from Π by dropping the weight of every rule. When Π has no variables, Π_I denotes the set of weighted rules w : R in Π such that I ⊨ R.
In general, an LPMLN program may even have stable models that violate some hard rules, which encode definite knowledge. However, throughout the paper, we restrict attention to LPMLN programs whose stable models do not violate hard rules. More precisely, given an LPMLN program Π, SM[Π] denotes the set

{ I | I is a stable model of the set of formulas obtained from Π_I by dropping the weights, and I satisfies all hard rules in Π }.
For any interpretation I, its weight W_Π(I) and its probability P_Π(I) are defined as follows:

W_Π(I) = exp( − Σ_{w_i : R_i ∈ Π_soft} w_i · n_i(I) )   if I ∈ SM[Π], and W_Π(I) = 0 otherwise,

where Π_soft consists of all soft rules in Π, and

P_Π(I) = W_Π(I) / Σ_{J ∈ SM[Π]} W_Π(J).
An interpretation I is called a (probabilistic) stable model of Π if P_Π(I) ≠ 0. When SM[Π] is non-empty, it turns out that every probabilistic stable model satisfies all hard rules, and the definitions of W_Π and P_Π above are equivalent to the original definitions [Lee and Wang 2016, Proposition 2].
For any proposition A, the probability of A under Π is defined as

P_Π(A) = Σ_{I : I ⊨ A} P_Π(I).
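To make the definitions above concrete, the following sketch (ours, not part of any system described here) computes the probability of a proposition by normalizing model weights; it assumes stable models are represented as sets of atom strings and that their unnormalized weights W_Π(I) have already been computed.

```python
def proposition_probability(models, model_weights, satisfies):
    """P(A): the sum of the normalized weights of the (probabilistic)
    stable models that satisfy proposition A.

    models        -- list of stable models (sets of ground atoms)
    model_weights -- their unnormalized weights W(I), aligned with models
    satisfies     -- predicate testing whether a model satisfies A
    """
    z = sum(model_weights)  # normalization constant over SM[Pi]
    return sum(w for m, w in zip(models, model_weights) if satisfies(m)) / z
```

For instance, with two stable models {a} and {} of weights 3 and 1, the probability of a is 3/4.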
3 Weight Learning
3.1 General Problem Statement
A parameterized LPMLN program Π(w) is defined similarly to an LPMLN program except that non-α weights (i.e., "soft" weights) are replaced with distinct parameters w to be learned. By Π(w), where w is a list of real numbers whose length is the same as the number of soft rules, we denote the LPMLN program obtained from Π by replacing the parameters with w. The weight learning task for a parameterized LPMLN program is to find the MLE (Maximum Likelihood Estimation) of the parameters, as in Markov Logic. Formally, given a parameterized LPMLN program Π(w) and a ground formula E (often in the form of a conjunction of literals) called the observation or training data, the parameter learning task is to find the values of the parameters w such that the probability of E under the program Π(w) is maximized. In other words, the learning task is to find

(2)   ŵ = argmax_w P_{Π(w)}(E).
3.2 Gradient Method for Learning Weights From a Complete Stable Model
As in Markov Logic, there is no closed-form solution for (2), but the gradient ascent method can be applied to find the optimal weights in an iterative manner.
We first compute the gradient. Given a (non-ground) program Π(w) whose SM[Π(w)] is non-empty and given a stable model I of Π(w), the natural logarithm of P_{Π(w)}(I) is

ln P_{Π(w)}(I) = − Σ_i w_i · n_i(I) − ln Σ_{J ∈ SM[Π(w)]} exp( − Σ_i w_i · n_i(J) ).
The partial derivative of ln P_{Π(w)}(I) w.r.t. w_i is

∂ ln P_{Π(w)}(I) / ∂w_i = − n_i(I) + Σ_{J ∈ SM[Π(w)]} P_{Π(w)}(J) · n_i(J) = − n_i(I) + E[n_i],

where E[n_i] is the expected number of false ground rules obtained from R_i under the distribution P_{Π(w)}.
Since the log-likelihood above is a concave function of the weights, any local maximum is a global maximum, and maximizing ln P_{Π(w)}(I) can be done by the standard gradient ascent method, updating each weight by w_i ← w_i + λ · ( −n_i(I) + E[n_i] ) until it converges. (Note that although any local maximum is a global maximum for the log-likelihood function, there can be multiple combinations of weights that achieve the maximum probability of the training data.)
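The gradient ascent loop can be sketched as follows. This is a minimal sketch rather than the paper's implementation: `expected_n_fn` stands in for the expectation E[n_i], which in practice must be approximated by sampling, and the observed counts may be fractional averages when learning from several models.

```python
def log_likelihood_gradient(n_obs, expected_n):
    """Gradient of the log-likelihood w.r.t. each soft-rule weight under
    the penalty-based formulation: -n_i(I) + E[n_i]."""
    return [e - n for n, e in zip(n_obs, expected_n)]

def gradient_ascent(weights, n_obs, expected_n_fn, lr=0.1, tol=1e-4,
                    max_iter=10000):
    """Update each weight by lr * gradient until the gradient's L1 norm
    falls below tol. expected_n_fn maps the current weights to the
    expected false-ground-instance counts E[n_i]."""
    for _ in range(max_iter):
        grad = log_likelihood_gradient(n_obs, expected_n_fn(weights))
        weights = [w + lr * g for w, g in zip(weights, grad)]
        if sum(abs(g) for g in grad) < tol:
            break
    return weights
```

As a toy check, for a single soft fact with two stable models (the fact true, penalty 0, or false, penalty 1), the expected false count is 1/(1+e^w); feeding an observed average of 1/3 drives w to the log-odds ln 2.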
However, similar to Markov Logic, computing E[n_i] exactly is intractable [Richardson and Domingos 2006]. In the next section, we turn to an MCMC sampling method to find its approximate value.
3.3 Sampling Method: MC-ASP
The following is an MCMC algorithm for LPMLN, which adapts the algorithm MC-SAT for Markov Logic [Poon and Domingos 2006] by adopting the penalty-based reformulation and by using an ASP solver instead of a SAT solver for sampling.
When all the weights of soft rules are non-positive, the value 1 − e^{w_i} (at step (b)) is in the range [0, 1) and thus validly represents a probability. At each iteration, the sample is chosen from the stable models of the selected subset M together with the hard rules, and consequently, it must satisfy all hard rules. For each soft rule, the higher its weight, the less likely it is to be included in M, and thus the less likely it is to be left unsatisfied by the sample generated from M.
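The rule-selection step can be sketched as follows, under the assumption stated above: each soft rule violated by the current sample is included in M with probability 1 − e^{w_i}, which is well-defined when all soft weights are non-positive. The rule and weight representation here is a hypothetical simplification, not the system's data format.

```python
import math
import random

def choose_constraints(violated_rules, weights, rng=random.random):
    """Sketch of the MC-ASP selection step: for each soft rule (given by
    index) violated by the current sample, include it in M with
    probability 1 - e^{w_i}. A weight closer to 0 (i.e., higher, since
    weights are non-positive) makes inclusion less likely."""
    included = []
    for i in violated_rules:
        if rng() < 1.0 - math.exp(weights[i]):
            included.append(i)
    return included
```

With a degenerate random source returning 0, only rules with strictly negative weight are ever included, matching the fact that a weight of 0 gives inclusion probability 0.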
The following theorem states that MC-ASP satisfies the MCMC criteria of ergodicity and detailed balance, which justifies the soundness of the algorithm.
Theorem 1
The Markov chain generated by MC-ASP satisfies ergodicity and detailed balance. (A Markov chain is ergodic if there is a number N such that any state can be reached from any other state in any number of steps greater than or equal to N. Detailed balance means π(x) · P(x → x′) = π(x′) · P(x′ → x) for any samples x and x′, where P(x → x′) denotes the probability that the next sample is x′ given that the current sample is x.)
Steps 1 and 2(c) of the algorithm require finding a probabilistic stable model of an LPMLN program, which can be computed by the system lpmln2asp [Lee, Talsania, and Wang 2017]. The system is based on a translation that turns an LPMLN program Π into an ASP program with weak constraints. The translation turns each (possibly non-ground) soft rule

(3)   w_i : A ← B, N
into

unsat(i, w_i) ← B, N, not A
{A} ← B, N, not unsat(i, w_i)
:~ unsat(i, w_i). [w_i, i]

(If A is a disjunction of atoms A_1; …; A_k, then {A} denotes the choice expression {A_1; …; A_k}.) Each hard rule α : A ← B, N is turned into

unsat(i) ← B, N, not A
{A} ← B, N, not unsat(i)
← unsat(i).

System lpmln2asp applies this translation to an LPMLN program Π and calls the ASP solver clingo to find the stable models of the translated program, which coincide with the probabilistic stable models of Π. The weight of a stable model can be computed from the weights recorded in the unsat atoms that are true in the stable model.
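The weight computation described above can be sketched as follows. For simplicity we represent the recorded atoms in a reduced form unsat(i) and look up the rule weights separately; the actual atom format produced by lpmln2asp (which records the weight inside the atom) may differ.

```python
import math

def model_weight(true_atoms, rule_weights):
    """Penalty-based weight of a stable model: exp(- sum of the weights
    of the violated soft rules), read off from the unsat atoms that are
    true in the model.

    true_atoms   -- set of ground atoms, e.g. {"a", "unsat(1)"}
    rule_weights -- dict mapping a soft-rule index to its weight
    """
    penalty = sum(w for i, w in rule_weights.items()
                  if f"unsat({i})" in true_atoms)
    return math.exp(-penalty)
```

A model violating no soft rules gets weight 1; each violated rule multiplies the weight by e^{-w_i}.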
Step 2(c) also requires a uniform sampler for answer sets, which can be provided by xorro [Gebser et al. 2016].
Input: Π: a parameterized LPMLN program in the input language of lpmln2asp; I: a stable model represented as a set of constraints (that is, the constraint ← not a is in I if a ground atom a is true; ← a is in I if a is not true); δ: a fixed real number to be used for the terminating condition.

Output: Π with learned weights.

Process:

1. Initialize the weights w_i of the soft rules with some initial weights.

2. Repeat the following until the sum of the absolute values of the weight updates is less than δ:

(a) Compute the stable model of Π ∪ I using lpmln2asp (see below); for each soft rule i, compute n_i(I) by counting the unsat atoms whose first argument is i (i is a rule index).

(b) Create Π′ by replacing each soft rule of the form w : R in Π where w > 0 with an equivalent combination of rules whose weights are non-positive.

(c) Run MC-ASP on Π′ to collect a set S of sample stable models.

(d) For each soft rule i, approximate E[n_i] with (1/|S|) Σ_{J ∈ S} n_i(J), where each n_i(J) is obtained by counting the unsat atoms whose first argument is i.

(e) For each i, update

w_i ← w_i + λ · ( −n_i(I) + E[n_i] ).

Algorithm 2 is a weight learning algorithm for LPMLN based on gradient ascent, using MC-ASP (Algorithm 1) for collecting samples. Step 2(b) of MC-ASP requires that the weights be non-positive in order for 1 − e^{w_i} to represent a probability. Unlike in the Markov Logic setting, converting positive weights into non-positive weights cannot be done in LPMLN simply by replacing w : R with −w : ¬R, due to the difference between the first-order and the stable model semantics. Algorithm 2 converts Π into an equivalent program Π′ whose rules' weights are non-positive before calling MC-ASP. The following theorem justifies the soundness of this method. (Note that Π′ is only used in MC-ASP. The output of Algorithm 2 may have positive weights.)
Theorem 2
When SM[Π] is not empty, the program Π′ specifies the same probability distribution as the program Π. (Non-emptiness of SM[Π] implies that every probabilistic stable model of Π satisfies all hard rules in Π.)
4 Extensions
The base case of learning in the previous section assumes that the training data is a single stable model that is a complete interpretation. This section extends the framework in a few ways.
4.1 Learning from Multiple Stable Models
The method described in the previous section allows only one stable model to be used as the training data. Now, suppose we have multiple stable models as the training data. For example, consider the parameterized program Π(w) that describes a coin, which may or may not land on heads when it is flipped,
(the first rule is a choice rule) and three stable models as the training data: I₁ = {flip}, I₂ = {flip}, I₃ = {flip, head} (the absence of head in an answer set is understood as landing on tails), indicating that head has a frequency of 1/3 and tail has a frequency of 2/3. Intuitively, the more often we observe head, the larger the weight of the second rule should be. Clearly, learning from only one of I₁, I₂, I₃ won't result in a weight that captures all three stable models: learning from either I₁ or I₂ results in a value of w too small for head to have a frequency of 1/3, while learning from I₃ results in a value of w too large for head to have a frequency of 1/3.
To utilize the information from multiple stable models I₁, …, Iₘ, one natural idea is to maximize the joint probability of all the stable models in the training data, which is the product of their probabilities, i.e.,

P(I₁, …, Iₘ) = Π_{j=1,…,m} P_{Π(w)}(I_j).

The partial derivative of ln P(I₁, …, Iₘ) w.r.t. w_i is

∂ ln P(I₁, …, Iₘ) / ∂w_i = Σ_{j=1,…,m} ( −n_i(I_j) + E[n_i] ).

In other words, the gradient of the log joint probability is simply the sum of the gradients of the log probabilities of the individual stable models in the training data. To update Algorithm 2 to reflect this, we simply repeat step 2(a) to compute n_i(I_j) for each I_j, and at step 2(e) update w_i as follows:

w_i ← w_i + λ · Σ_{j=1,…,m} ( −n_i(I_j) + E[n_i] ).
Alternatively, learning from multiple stable models can be reduced to learning from a single stable model by introducing one more argument to every predicate, which represents the index of a stable model in the training data, and rewriting the data to include the index.
Formally, given an LPMLN program Π and a set of its stable models {I₁, …, Iₘ}, let Πᵐ be the LPMLN program obtained from Π by appending one more argument k to the list of arguments of every predicate that occurs in Π, where k is a schematic variable that ranges over 1, …, m. Let
(4)   I = I₁¹ ∪ ⋯ ∪ Iₘᵐ,

where each I_jʲ is obtained from I_j by appending j as the last argument of every atom.
The following theorem asserts that the weights of the rules in Π that are learned from the multiple stable models I₁, …, Iₘ are identical to the weights of the rules in Πᵐ that are learned from the single stable model I that combines I₁, …, Iₘ as in (4).
Theorem 3
For any parameterized LPMLN program Π(w), its stable models I₁, …, Iₘ, and I as defined in (4), we have

argmax_w Π_{j=1,…,m} P_{Π(w)}(I_j) = argmax_w P_{Πᵐ(w)}(I).
Example 1
For the coin program Π, to learn from the three stable models I₁, I₂, and I₃ defined before, we consider the program Π³
(k ∈ {1, 2, 3}) and combine I₁, I₂, I₃ into one stable model I = {flip(1), flip(2), flip(3), head(3)}. The weight w in Π³ learned from the single data I is identical to the weight w in Π learned from the three stable models I₁, I₂, I₃.
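The indexing construction of Section 4.1 can be sketched as follows; the string-based atom syntax and the particular coin data are illustrative assumptions, not the system's representation.

```python
def index_model(model, idx):
    """Append a stable-model index as an extra last argument to every
    atom: 'head' -> 'head(3)', 'edge(a,b)' -> 'edge(a,b,3)'."""
    indexed = set()
    for atom in model:
        if atom.endswith(")"):
            indexed.add(atom[:-1] + f",{idx})")
        else:
            indexed.add(f"{atom}({idx})")
    return indexed

def combine_models(models):
    """Combine multiple training stable models into a single one, as in
    (4), by indexing each model's atoms with its position."""
    combined = set()
    for k, m in enumerate(models, start=1):
        combined |= index_model(m, k)
    return combined
```

For three coin models where only the third contains head, the combined data is {flip(1), flip(2), flip(3), head(3)}.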
4.2 Learning in the Presence of Noisy Data
So far, we have assumed that the data are (probabilistic) stable models of the parameterized program. Otherwise, the joint probability would be zero regardless of the weights assigned to the soft rules, and the partial derivative of the log-likelihood is undefined. However, data gathered from the real world can be noisy, so some instances may not be stable models. Even then, we still want to learn from the other "correct" instances. We may drop the noisy instances in preprocessing, but this could be computationally expensive if the data is huge. Alternatively, we may mitigate the influence of the noisy data by introducing so-called "noise atoms" as follows.
Example 2
Consider again the coin program Π. Suppose one of the interpretations in the training data, say I_noisy, is not a stable model of Π. We obtain Π′ by modifying Π to allow for the noise atom as follows.
Here, w′ is a positive number that is "sufficiently" larger than the other weights. I_noisy is a stable model of Π′, so the combined training data is still a stable model, and thus a meaningful weight for w can still be learned, given that the other "correct" instances dominate in the learning process (as for the noisy example, the corresponding stable model gets a low, but non-zero, weight due to the weight assigned to the noise atom).
Furthermore, with the same value of w, the larger w′ becomes, the more closely the probability distribution defined by Π′ approximates the one defined by Π, so the value of w learned under Π′ approximates the value of w learned under Π when the noisy data is dropped.
4.3 Learning from Incomplete Interpretations
In the previous sections, we assumed that the training data is given as a (complete) interpretation, i.e., for each atom it specifies whether the atom is true or false. In this section, we discuss the general case where the training data is given as a partial interpretation, which omits the truth values of some atoms, or, more generally, where the training data is in the form of a formula that more than one stable model may satisfy.
Given a non-ground program Π(w) such that SM[Π(w)] is not empty and given a ground formula E as the training data, we have

ln P_{Π(w)}(E) = ln Σ_{I ⊨ E} P_{Π(w)}(I).

The partial derivative of ln P_{Π(w)}(E) w.r.t. w_i turns out to be

∂ ln P_{Π(w)}(E) / ∂w_i = − E_{I ⊨ E}[ n_i(I) ] + E[ n_i ],

where the first term is the conditional expectation of n_i over the stable models satisfying E. It is straightforward to extend Algorithm 2 to reflect this extension. Computing the approximate value of the first term can be done by sampling on Π(w) extended with constraints enforcing E.
5 Weight Learning via Translations to Other Languages
This section considers two fragments of LPMLN for which the parameter learning task reduces to the same task for Markov Logic or ProbLog.
5.1 Tight Program: Reduction to MLN Weight Learning
By Theorem 3 in [Lee and Wang 2016], any tight LPMLN program can be translated into a Markov Logic Network (MLN) by adding completion formulas [Erdem and Lifschitz 2003] with weight α. This means that weight learning for a tight LPMLN program can be reduced to weight learning for an MLN.
Given a tight LPMLN program Π and one (not necessarily complete) interpretation as the training data, the corresponding MLN is obtained by adding the completion formulas with weight α to Π.
The following theorem tells us that the weight assignment that maximizes the probability of the training data under the LPMLN program is identical to the weight assignment that maximizes the probability of the same training data under the corresponding MLN.
Theorem 4
Let M be the Markov Logic Network obtained from the tight LPMLN program Π as described above, and let E be a ground formula (as the training data). When SM[Π] is not empty,

argmax_w P_{Π(w)}(E) = argmax_w P_{M(w)}(E)

(M(w) is the parameterized MLN obtained from Π(w)).
Thus we may learn the weights of a tight LPMLN program using existing implementations of Markov Logic, such as alchemy and tuffy.
5.2 Coherent Program: Reduction to Parameter Learning in ProbLog
For another special class of LPMLN programs, weight learning can be reduced to weight learning in ProbLog [Fierens et al. 2013].
We say an LPMLN program Π is simple if all soft rules in Π are of the form

w : a

where a is an atom, and no atom occurring in the soft rules occurs in the head of a hard rule.
We say a simple LPMLN program Π is k-coherent (k ≥ 1) if, for any truth assignment to the atoms that occur in the soft rules of Π, there are exactly k probabilistic stable models of Π that satisfy the truth assignment. We also apply the notion of coherency when Π is parameterized.
Without loss of generality, we assume that no atom occurs more than once in the soft rules. (If one atom a occurs in multiple soft rules w₁ : a, …, w_j : a, these rules can be combined into w₁ + ⋯ + w_j : a.) A k-coherent program can thus be identified with the tuple ⟨A, R, W⟩, where A = a₁, …, aₙ is a list of (possibly non-ground) atoms that occur as soft rules in Π, R is the set of hard rules in Π, and W = w₁, …, wₙ is the list of the soft rules' weights, where w_i is the weight of a_i.
A ProbLog program can be viewed as a tuple ⟨A, R, P⟩, where A is a list of atoms called probabilistic facts, R is a set of rules such that no atom that occurs in A occurs in the head of any rule in R, and P is a list p₁, …, pₙ, where each p_i is the probability of probabilistic atom a_i. A parameterized ProbLog program is defined similarly, where P is a list of parameters to be learned.
Given a list of probabilities P = p₁, …, pₙ, we construct a list of weights W = w₁, …, wₙ as follows:

(5)   w_i = ln( p_i / (1 − p_i) )

for i = 1, …, n.
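Assuming (5) is the log-odds correspondence w_i = ln(p_i / (1 − p_i)), the conversion in both directions can be sketched as follows; the inverse e^w / (1 + e^w) recovers the probability from a weight.

```python
import math

def prob_to_weight(p):
    """Convert a ProbLog probability p in (0, 1) into an LPMLN soft-fact
    weight via the log-odds, w = ln(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def weight_to_prob(w):
    """Inverse of prob_to_weight: p = e^w / (1 + e^w)."""
    return math.exp(w) / (1.0 + math.exp(w))
```

For example, p = 0.5 maps to weight 0, and the two functions are inverses of each other.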
The following theorem asserts that weight learning on a 1-coherent LPMLN program can be done by weight learning on its corresponding ProbLog program.
Theorem 5
For any 1-coherent parameterized LPMLN program ⟨A, R, w⟩ and any interpretation I (as the training data), we have

w = argmax_w P_{⟨A,R,w⟩}(I)   if and only if   p = argmax_p P_{⟨A,R,p⟩}(I),

where w and p are related by (5).
According to the theorem, to learn the weights of a 1-coherent LPMLN program, we can simply construct the corresponding ProbLog program, perform ProbLog weight learning, and then turn the learned probabilities into weights according to (5).
In [Lee and Wang 2018], k-coherent LPMLN programs are shown to be useful for describing dynamic domains. Intuitively, each probabilistic choice leads to the same number of histories. For such a k-coherent program, weight learning given a complete interpretation as the training data can be done by simply counting the true and false ground instances of the soft atomic facts in the given interpretation.
For an interpretation I and i ∈ {1, …, n}, let nᵗ_i(I) and nᶠ_i(I) be the numbers of ground instances of a_i that are true in I and false in I, respectively.
Theorem 6
For any k-coherent parameterized LPMLN program ⟨A, R, w⟩ and any (complete) interpretation I (as the training data), the maximum of P_{⟨A,R,w⟩}(I) is achieved at

w_i = ln( nᵗ_i(I) / nᶠ_i(I) )

for each i.
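Assuming the closed form suggested by the counting scheme above — the MLE probability of each soft atomic fact is its empirical frequency, converted to a weight via the log-odds as in (5) — the computation reduces to a one-liner per soft fact:

```python
import math

def count_based_weight(n_true, n_false):
    """Closed-form weight for a soft atomic fact in a k-coherent
    program: the empirical frequency n_true / (n_true + n_false),
    converted to a weight via the log-odds, which simplifies to
    ln(n_true / n_false)."""
    p = n_true / (n_true + n_false)
    return math.log(p / (1.0 - p))
```

For instance, a fact observed true 1 time and false 3 times gets weight ln(1/3), and an even split gets weight 0.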
6 Implementation and Examples
We implemented Algorithm 2 and its extensions described above using clingo, lpmln2asp, and the near-uniform answer set sampler xorro. The implementation lpmlnlearn is available at https://github.com/ywng485/lpmlnlearning together with a manual and some examples.
In this section, we show how the implementation allows for learning weights in LPMLN from data, enabling learning parameters in knowledge-rich domains.
For all the experiments in this section, the terminating threshold δ and the learning rate λ are fixed, and a fixed number of samples is generated for each call of MC-ASP. The parameters for xorro are manually tuned to achieve the best performance for each specific example.
6.1 Learning Certainty Degrees of Hypotheses
The weight learning algorithm can be used to learn the certainty degree of a hypothesis from data. For example, consider a person carrying a certain virus who contacts a group of people. The virus spreads among them as people contact each other. We use the following ASP facts to specify that a certain person carries the virus and how people contacted each other:
Consider two hypotheses: that carrying the virus may cause a person to have a certain disease, and that the virus may spread by contact. The hypotheses can be represented in the input language of lpmlnlearn by the following rules, where w(1) and w(2) are parameters to be learned:
The parameterized program consists of these two rules and the facts about the contact relation. The training data specifies whether each person carries the virus and has the disease, for example:
The learned weights tell us how strongly the data support the hypotheses. Note that the program models the transitive closure of the carries_virus relation, which is not captured properly if the program is viewed as an MLN (that is, identifying each rule with a formula in first-order logic). Learning under the MLN semantics results in weights that assign unreasonably high probabilities to people carrying the virus even if they were not contacted by people with the virus.
For example, consider the following graph
where A is the person who initially carries the virus, triangle-shaped nodes represent people who carry the virus in the evidence, and the edges denote the contact relation. The cluster consisting of E, F, and G has no contact with the cluster consisting of A, B, C, and D. The following table shows the probability of each person carrying the virus, derived from the weights learned in accordance with Markov Logic and LPMLN, respectively. We use alchemy for the weight learning in Markov Logic.
Person | MLN      | LPMLN        | carries_virus (ground truth)
B      | 0.823968 | 0.6226904833 | Y
C      | 0.813969 | 0.6226904833 | Y
D      | 0.818968 | 0.6226904833 | N
E      | 0.688981 | 0            | N
F      | 0.680982 | 0            | N
G      | 0.680982 | 0            | N
As can be seen from the table, under the MLN semantics, each of E, F, and G has a high probability of carrying the virus, which is unintuitive since they had no contact with anyone carrying the virus.
6.2 Learning Probabilistic Graphs from Reachability
Consider an (unstable) communication network such as the one in Figure 1, where each node represents a signal station that sends and receives signals. A station may fail, making it impossible for signals to go through the station. The following rules define the connectivity between two stations X and Y in session T.
A specific network can be defined by specifying edge relations, such as edge(1,2). Suppose we have data showing the connectivity between stations in several sessions. Based on the data, we could make decisions such as which path is most reliable for sending a signal between two stations. Under the LPMLN framework, this can be done by learning the weights representing the failure rate of each station. For the network in Figure 1, we write the following rules, whose weights are to be learned:
Here T is the auxiliary argument that allows learning from multiple training examples, as described in Section 4.1. The training example contains constraints of the form :- not connected(X,Y) for known connected stations X and Y, or :- connected(X,Y) for known disconnected stations X and Y. Since the training data is incomplete in specifying the connectivity between the stations, we use the extension of Algorithm 2 described in Section 4.3. The failure rate of each station can be obtained from its learned weight w as e^w / (1 + e^w).
We execute learning on graphs with different numbers of nodes, where the smallest graph is shown in Figure 1. We add layers of 2 nodes between the source node and the sink node to obtain the other graphs, where there is an edge between every node in one layer and every node in the previous and next layers. Figure 2 shows the convergence behavior over time in terms of the sum of the absolute values of the gradients of all weights. Running time is mostly spent in the uniform sampler for answer sets. The experiments are performed on a machine with a 4-core Intel(R) Core(TM) i5-2400 CPU, Ubuntu 14.04.5 LTS, and 8 GB memory.
Figure 2 shows that convergence takes longer as the number of nodes increases, which is not surprising. Note that the current implementation is not very efficient: even for small graphs, it takes a substantial amount of time to obtain reasonable convergence. The computation bottleneck lies in the uniform sampler used in Step 2(c) of Algorithm 1, whereas creating Π′ and turning LPMLN programs into ASP programs are done instantly. The uniform sampler that we use, xorro, follows Algorithm 2 in [Gomes, Sabharwal, and Selman 2007]. It uses a fixed number of random XOR constraints to prune out a subset of stable models, and randomly selects one remaining stable model to return. The process of solving for all stable models after applying the XOR constraints can be very time-consuming.
In this example, it is essential that the samples are generated by an ASP solver, because information about node failure needs to be correctly derived from the connectivity, which involves reasoning about the transitive closure.
As Theorem 5 indicates, this weight learning task can alternatively be done through ProbLog weight learning. We use problog (https://dtai.cs.kuleuven.be/problog/), an implementation of ProbLog. The performance of problog on weight learning depends on the tightness of the input program. We observed that for many tight programs, problog appears to have better scalability than our prototype lpmlnlearn. However, the problog system does not show consistent performance on non-tight programs, such as the encoding of the network example above, possibly because it has to convert the input program into weighted Boolean formulas, which is expensive for non-tight programs. (The difference appears to be analogous to the different approaches to handling non-tight programs by answer set solvers, e.g., the translation-based approach of assat and cmodels and the native approach of clingo.) We can identify many graph instances of the network failure example where our prototype system outperforms problog as the density of the graph gets higher. For example, consider the graph in Figure 1. With the nodes fixed, as we add more edges to make the graph denser, we eventually hit a point where problog does not return a result within a reasonable time limit. Below are the statistics for several instances.
# Edges | lpmlnlearn | problog (original) | problog (rewritten)
—       | 351.237s   | 2.565s             | 0.846s
—       | 476.656s   | 2.854s             | 0.833s
—       | 740.656s   | > 20 min           | 0.957s
—       | 484.348s   | > 20 min           | 76.143s
—       | 304.407s   | > 20 min           | 26.642s
The input files to problog consist of two parts: the edge list and the part that defines the node failure rates and connectivity. The latter differs between the two problog columns in the table. For the second column, it is the same as the input to lpmlnlearn:
For the third column, we rewrite the rules to make the Boolean formula conversion easier for problog. (This rewriting was suggested by Angelika Kimmig, personal communication.) The input program is:
Although all graph instances have some cycles, the difference between the instance with 14 edges and the one with 15 edges is the addition of one cycle. Even with this slight change in the graph, the performance of problog becomes significantly slower.
6.3 Learning Parameters for Abductive Reasoning about Actions
One of the successful applications of answer set programming is modeling dynamic domains. LPMLN can be used to extend such modeling to allow uncertainty. A high-level action language is defined as a shorthand notation for LPMLN [Lee and Wang 2018]. The language allows for probabilistic diagnoses in action domains: given the action description and histories where an abnormal behavior occurs, how do we find the reason for the failure? There, the probabilities are specified by the user. This can be enhanced by learning the probability of failure from example histories using lpmlnlearn. (ProbLog could not be used in place of LPMLN here because it requires that every total choice lead to exactly one well-founded model, and consequently it does not support choice rules, which are used in the formalization of the robot example in this section.) In this section, we show how LPMLN weight learning can be used for learning parameters for abductive reasoning in action domains. To keep the paper self-contained, instead of showing descriptions in the action language, we show their counterparts in LPMLN.
Consider the robot domain described in [Iwan 2002]: a robot located in a building with two rooms r1 and r2, and a book that can be picked up. The robot can move between rooms, pick up the book, and put down the book. Sometimes actions may fail: the robot may fail to enter a room, may fail to pick up the book, and may drop the book when it has the book. The domain can be modeled using answer set programs, e.g., [Lifschitz and Turner 1999]. We illustrate how such a description can be enhanced to allow abnormalities, and how the weight learning method can learn the probabilities of the abnormalities given a set of actions and their effects.
We introduce a predicate to represent that some abnormality occurred at a step, and a predicate for each specific abnormality to represent that it occurred at that step. The occurrences of specific abnormalities are controlled by probabilistic fact atoms (pf atoms) and their preconditions. For example,
defines that the abnormality EnterFailed occurs with some probability (controlled by the corresponding weighted atomic pf fact, which is introduced to represent the probability of the occurrence of EnterFailed) at a time step if there is some abnormality at that time step. Similarly, we have
When we describe the effects of actions, we need to specify "no abnormality" as part of the precondition of each effect. The location of the robot changes to room r if it goes to room r, unless abnormality EnterFailed occurs:
The location of the book is the same as the location of the robot if the robot has the book:
The robot has the book if it is at the same location as the book and it picks up the book, unless abnormality PickupFailed occurs:
The robot loses the book if it puts down the book:
The robot loses the book if abnormality DropBook occurs:
The commonsense law of inertia for each fluent is specified by the following hard rules:
Due to lack of space, we skip the rules specifying the uniqueness and existence of fluent and action values, the rules specifying that no two actions can occur at the same time step, and the rules specifying that the initial state and actions are exogenous.
We add the hard rule
to enable abnormalities at each time step i.
To use multiple action histories as the training data, we use the method from Section 4.1 and introduce an extra argument, representing the action history ID, to every predicate.
We then provide a list of 12 transitions as the training data. For example, the first transition (ID = 1) tells us that the robot performed the goto action to room r2 and the action failed.
Among the training data, enter_failed occurred 1 time out of 4 attempts, pickup_failed occurred 2 times out of 4 attempts, and drop_book occurred 1 time out of 4 attempts. The transitions are partially observed data in the sense that they specify only some of the fluents and actions; the other facts about fluents, actions, and abnormalities have to be inferred.
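As a sanity check on the learned weights, the empirical failure frequencies can be computed directly from the counts reported above. The dictionary below is an illustrative encoding of those counts, not the system's input format:

```python
# Empirical failure frequencies from the 12 training transitions,
# using the attempt/failure counts reported in the text.
counts = {
    "enter_failed": (1, 4),   # 1 failure out of 4 goto attempts
    "pickup_failed": (2, 4),  # 2 failures out of 4 pickup attempts
    "drop_book": (1, 4),      # 1 drop out of 4 opportunities
}

# Relative frequency of each abnormality among its attempts.
freqs = {ab: failures / attempts
         for ab, (failures, attempts) in counts.items()}
```

With fully converged weights, the learned probabilities should approximate these frequencies (0.25, 0.5, and 0.25, respectively).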
Note that this program is coherent, where the relevant count is the number of available actions (i.e., goto, pick up, and put down) plus one for the case in which no action occurs. We execute gradient ascent learning with 50 learning iterations and 50 sampling iterations for each learning iteration. The weights learned are
The probability of each abnormality can be computed from the weights as follows:
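Treating each pf atom as an independent weighted fact, its probability can be recovered from its learned weight w via the logistic transformation e^w / (1 + e^w). A minimal sketch (the function name and the example weight are ours, for illustration only):

```python
import math

def weight_to_probability(w):
    # For a weighted atomic fact with weight w, satisfying it contributes
    # e^w and not satisfying it contributes e^0 = 1, so
    # p = e^w / (1 + e^w).
    return math.exp(w) / (1.0 + math.exp(w))
```

Conversely, a target probability p corresponds to the weight ln(p / (1 - p)), which is how manually specified probabilities are typically translated into weights.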
The learned weights of the pf atoms indicate the probability of each action failure when some abnormal situation ab(I, ID) happens. This allows us to perform probabilistic diagnostic reasoning in which the parameters are learned from histories of actions. For example, suppose the robot and the book were initially at r1. The robot executed the following actions to deliver the book from r1 to r2: pick up the book; go to r2; put down the book. However, after the execution, it observes that the book is not at r2. What was the problem?
Executing system lpmln2asp on this encoding tells us that the most probable reason is that the robot failed to pick up the book. However, if we add the observation that the robot itself is also not at r2, then lpmln2asp computes the most probable stable model to be one in which the robot failed to enter r2.
7 Conclusion
The work presented here relates answer set programming to learning from data, which has been underexplored, with some exceptions such as [Law, Russo, and Broda2014, Nickles2016]. Via LPMLN, learning methods developed for Markov Logic can be adapted to find the weights of rules under the stable model semantics, utilizing answer set solvers to perform MCMC sampling. Rooted in the stable model semantics, LPMLN learning is useful for learning parameters for programs modeling knowledge-rich domains. Unlike MC-SAT for Markov Logic, MC-ASP allows us to infer the missing parts of the data guided by the stable model semantics. Overall, the work paves the way for a knowledge representation formalism to embrace machine learning methods.
The current learning implementation is a prototype whose computational bottleneck is the uniform sampler, which is used as a black box. Unlike in machine learning, sampling has not received much attention in the context of answer set programming, and even the existing sampler we adopted was not designed for the iterative calls required by the MCMC sampling method. This is where we believe a significant performance gain can be obtained. Using ideas such as constrained sampling [Meel et al.2016] may enhance the solution quality and the scalability of the implementation, which is left for future work.
PrASP [Nickles and Mileo2014] is related to LPMLN in the sense that it is also a probabilistic extension of ASP. Weight learning in PrASP is very similar to weight learning in LPMLN: a variation of gradient ascent is used to update the weights so that they converge to values that maximize the probability of the training data. In the PrASP setting, the gradient of the probability of the training data cannot be expressed in closed form and is thus hard to compute. PrASP addresses this problem by approximating the gradient with the difference between the probability of the training data under the current weights and under the current weights slightly incremented. The probability of the training data, given fixed weights, is computed with inference algorithms, which typically involve sampling methods.
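The finite-difference approximation described above can be sketched as follows. The log_likelihood callback stands in for any (typically sampling-based) inference procedure that scores the training data under given weights; the function and parameter names are ours:

```python
def finite_difference_gradient(log_likelihood, weights, eps=1e-4):
    """Approximate the gradient of log_likelihood at `weights` by
    forward differences, as done when no closed form is available.

    log_likelihood: callable mapping a list of weights to a float
    weights: current weight vector (list of floats)
    eps: size of the increment applied to each weight in turn
    """
    base = log_likelihood(weights)
    grad = []
    for i in range(len(weights)):
        bumped = list(weights)
        bumped[i] += eps          # perturb one weight at a time
        grad.append((log_likelihood(bumped) - base) / eps)
    return grad
```

Note that each gradient estimate requires one inference call per weight, which is why this scheme becomes expensive when inference itself relies on sampling.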
In this paper, we considered only weight learning with the basic gradient ascent method. There are several advanced weight learning techniques and more sophisticated problem settings used for MLN weight learning that could be adapted to LPMLN. For example, [Lowd and Domingos2007] discussed enhancements to basic gradient ascent, [Khot et al.2011] proposed a method for learning the structure and the weights simultaneously, and [Mittal and Singh2016] discussed how to automatically identify clusters of ground instances of a rule and learn a different weight for each cluster.
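The basic gradient ascent used in this paper updates each soft rule's weight by the difference between the observed count of satisfied ground instances and the expected count under the current weights, with the expectation estimated from samples (e.g., produced by MC-ASP). A minimal sketch, where the function name and data formats are ours:

```python
def gradient_step(weights, observed_counts, samples, lr=0.1):
    """One gradient-ascent update on soft-rule weights.

    weights: dict mapping rule id -> current weight
    observed_counts: dict mapping rule id -> number of satisfied
        ground instances in the training interpretation
    samples: list of dicts (rule id -> satisfied-count), drawn from
        the distribution induced by the current weights
    lr: learning rate
    """
    new_weights = {}
    for rule, w in weights.items():
        expected = sum(s[rule] for s in samples) / len(samples)
        # d log P(data) / d w_rule = observed count - expected count
        new_weights[rule] = w + lr * (observed_counts[rule] - expected)
    return new_weights
```

Running this step for a fixed number of iterations, re-sampling at each iteration, corresponds to the learning loop described in the robot example (50 learning iterations with 50 sampling iterations each).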
Acknowledgments: We are grateful to Zhun Yang and the anonymous referees for their useful comments. This work was partially supported by the National Science Foundation under Grants IIS-1526301 and IIS-1815337.
References
 [Balai and Gelfond2016] Balai, E., and Gelfond, M. 2016. On the relationship between P-log and LPMLN. In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI), 915–921.
 [Baral, Gelfond, and Rushton2009] Baral, C.; Gelfond, M.; and Rushton, J. N. 2009. Probabilistic reasoning with answer sets. Theory and Practice of Logic Programming 9(1):57–144.
 [De Raedt, Kimmig, and Toivonen2007] De Raedt, L.; Kimmig, A.; and Toivonen, H. 2007. ProbLog: A probabilistic Prolog and its application in link discovery. In IJCAI, volume 7, 2462–2467.
 [Erdem and Lifschitz2003] Erdem, E., and Lifschitz, V. 2003. Tight logic programs. Theory and Practice of Logic Programming 3:499–518.
 [Ferraris, Lee, and Lifschitz2011] Ferraris, P.; Lee, J.; and Lifschitz, V. 2011. Stable models and circumscription. Artificial Intelligence 175:236–263.
 [Fierens et al.2013] Fierens, D.; Van den Broeck, G.; Renkens, J.; Shterionov, D.; Gutmann, B.; Thon, I.; Janssens, G.; and De Raedt, L. 2013. Inference and learning in probabilistic logic programs using weighted Boolean formulas. Theory and Practice of Logic Programming 15(3):358–401.
 [Gebser et al.2016] Gebser, M.; Schaub, T.; Marius, S.; and Thiele, S. 2016. xorro: Near uniform sampling of answer sets by means of XOR. https://potassco.org/labs/2016/09/20/xorro.html.
 [Gomes, Sabharwal, and Selman2007] Gomes, C. P.; Sabharwal, A.; and Selman, B. 2007. Near-uniform sampling of combinatorial spaces using XOR constraints. In Schölkopf, B.; Platt, J. C.; and Hoffman, T., eds., Advances in Neural Information Processing Systems 19. MIT Press. 481–488.
 [Iwan2002] Iwan, G. 2002. History-based diagnosis templates in the framework of the situation calculus. AI Communications 15(1):31–45.
 [Khot et al.2011] Khot, T.; Natarajan, S.; Kersting, K.; and Shavlik, J. 2011. Learning Markov Logic Networks via functional gradient boosting. In 2011 11th IEEE International Conference on Data Mining, 320–329.
 [Law, Russo, and Broda2014] Law, M.; Russo, A.; and Broda, K. 2014. Inductive learning of answer set programs. In Logics in Artificial Intelligence. Springer. 311–325.
 [Lee and Wang2016] Lee, J., and Wang, Y. 2016. Weighted rules under the stable model semantics. In Proceedings of International Conference on Principles of Knowledge Representation and Reasoning (KR), 145–154.
 [Lee and Wang2018] Lee, J., and Wang, Y. 2018. A probabilistic extension of action language BC+. Theory and Practice of Logic Programming 18(3-4):607–622.
 [Lee and Yang2017] Lee, J., and Yang, Z. 2017. LPMLN, weak constraints, and P-log. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 1170–1177.
 [Lee, Meng, and Wang2015] Lee, J.; Meng, Y.; and Wang, Y. 2015. Markov logic style weighted rules under the stable model semantics. In Technical Communications of the 31st International Conference on Logic Programming.
 [Lee, Talsania, and Wang2017] Lee, J.; Talsania, S.; and Wang, Y. 2017. Computing LPMLN using ASP and MLN solvers. Theory and Practice of Logic Programming 17(5-6):942–960.
 [Lifschitz and Turner1999] Lifschitz, V., and Turner, H. 1999. Representing transition systems by logic programs. In Proceedings of International Conference on Logic Programming and Nonmonotonic Reasoning (LPNMR), 92–106.
 [Lowd and Domingos2007] Lowd, D., and Domingos, P. 2007. Efficient weight learning for Markov logic networks. In European Conference on Principles of Data Mining and Knowledge Discovery, 200–211. Springer.
 [Meel et al.2016] Meel, K. S.; Vardi, M. Y.; Chakraborty, S.; Fremont, D. J.; Seshia, S. A.; Fried, D.; Ivrii, A.; and Malik, S. 2016. Constrained sampling and counting: Universal hashing meets SAT solving. In AAAI Workshop: Beyond NP.
 [Mittal and Singh2016] Mittal, H., and Singh, S. S. 2016. Fine grained weight learning in Markov logic networks. In International Workshop on Statistical Relational AI.
 [Nickles and Mileo2014] Nickles, M., and Mileo, A. 2014. Probabilistic inductive logic programming based on answer set programming. In 15th International Workshop on Non-Monotonic Reasoning (NMR 2014).
 [Nickles2016] Nickles, M. 2016. A tool for probabilistic reasoning based on logic programming and first-order theories under stable model semantics. In European Conference on Logics in Artificial Intelligence (JELIA), 369–384.
 [Pearl2000] Pearl, J. 2000. Causality: models, reasoning and inference, volume 29. Cambridge Univ Press.
 [Poon and Domingos2006] Poon, H., and Domingos, P. 2006. Sound and efficient inference with probabilistic and deterministic dependencies. In AAAI, volume 6, 458–463.
 [Richardson and Domingos2006] Richardson, M., and Domingos, P. 2006. Markov logic networks. Machine Learning 62(1-2):107–136.
 [Wang and Zhang2017] Wang, B., and Zhang, Z. 2017. A parallel LPMLN solver: Primary report. In Working Notes of the Workshop on Answer Set Programming and Other Computing Paradigms (ASPOCP).
 [Wang et al.2018] Wang, B.; Zhang, Z.; Xu, H.; and Shen, J. 2018. Splitting an LPMLN program. In AAAI.
Appendix A Proofs
A.1 Proof of Theorem 1
Lemma 1
For any program and a probabilistic stable model of , we have
where
Proof. Let be the maximum number of hard rules in that any interpretation can satisfy. For any interpretation , we use as an abbreviation of “ is a probabilistic stable model of ”.
By definition we have
Splitting the denominator into two parts, namely those interpretations that satisfy the maximum number of hard rules in the program and those that satisfy fewer hard rules, and extracting the weights of the hard rules, we have