2018

Using Vector Representation of Propositions and Actions for STRIPS Action Model Learning

DOI: 10.15918/j.jbit1004-0579.18072

Keywords: automated planning, action model learning, vector representation of propositions


Abstract:

Action model learning has become a hot topic in knowledge engineering for automated planning. A key problem in learning action models is to analyze state changes before and after action executions from observed "plan traces". To support such an analysis, a new approach is proposed to partition the propositions of plan traces into states. First, vector representations of propositions and actions are obtained by training a Skip-Gram neural network, borrowed from natural language processing (NLP). Then, a semantic distance among propositions and actions is defined based on their similarity measures in the vector space. Finally, the k-means and k-nearest neighbor (kNN) algorithms are exploited to map propositions to states. This approach, called state partition by word vector (SPWV), is implemented on top of a recent action model learning framework by Rao et al. Experimental results on benchmark domains show that SPWV yields a lower error rate in the learnt action model than the probability-based state-partition approach developed by Rao et al.
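
As a rough illustration of the three steps described in the abstract, the following minimal Python sketch uses gensim's Word2Vec (sg=1 selects Skip-Gram) and scikit-learn's KMeans as stand-ins for the paper's own implementation; all tokens, plan traces, and parameter values below are hypothetical and not taken from the paper.

import numpy as np
from gensim.models import Word2Vec          # Skip-Gram when sg=1 (gensim 4.x API assumed)
from sklearn.cluster import KMeans

# Plan traces treated as "sentences": interleaved action and proposition tokens.
# These tokens are invented for illustration only.
plan_traces = [
    ["at-r1-l1", "pick-up-r1-b1-l1", "holding-r1-b1", "move-r1-l1-l2", "at-r1-l2"],
    ["at-r1-l2", "put-down-r1-b1-l2", "on-b1-l2", "move-r1-l2-l1", "at-r1-l1"],
]

# Step 1: vector representations of propositions and actions via Skip-Gram.
model = Word2Vec(plan_traces, vector_size=16, window=2, min_count=1, sg=1, epochs=200)

# Step 2: a semantic distance derived from similarity in the vector space
# (cosine distance here); the full approach would also use it for kNN assignment.
def distance(u, v):
    u, v = model.wv[u], model.wv[v]
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Step 3: cluster proposition vectors with k-means to group them into states.
propositions = sorted({t for trace in plan_traces for t in trace
                       if not t.startswith(("pick", "put", "move"))})
X = np.stack([model.wv[p] for p in propositions])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for p, c in zip(propositions, labels):
    print(f"state cluster {c}: {p}")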
