====== Object pose representation ======

Information about the poses and dimensions of objects is crucial for finding and manipulating them. In KnowRob, object dimensions are described as simple bounding boxes or cylinders, specifying the height and either the width and depth or the radius. While this is clearly not sufficient for grasping, we chose this coarse description as a compromise in order not to put too many details, such as point clouds or meshes, into the knowledge base. Such information is instead stored in specialized file formats and linked from the knowledge base.

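As a minimal sketch of such a coarse dimension description (the class and field names here are illustrative assumptions, not KnowRob identifiers):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectDimensions:
    """Coarse geometry as stored in the knowledge base: either a bounding
    box (height, width, depth) or a cylinder (height, radius)."""
    height: float
    width: Optional[float] = None
    depth: Optional[float] = None
    radius: Optional[float] = None

    def is_cylinder(self) -> bool:
        # A cylinder specifies a radius instead of width and depth.
        return self.radius is not None

mug = ObjectDimensions(height=0.10, radius=0.04)              # cylinder
book = ObjectDimensions(height=0.03, width=0.15, depth=0.21)  # bounding box
```

Either variant stores only three numbers per object, which keeps the knowledge base small while still supporting coarse queries such as "does this object fit on that shelf".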
Object poses are described via homogeneous transformation matrices. By default, the system assumes all poses to be in the same global coordinate system. Pose matrices can, however, be qualified with a coordinate frame identifier. The robot can then transform these local poses into the global coordinate system.

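A small numeric sketch of this transformation (the frame names are illustrative): a pose stored relative to a local table frame is re-expressed in the global map frame by multiplying 4x4 homogeneous matrices.

```python
import numpy as np

# Pose of the table frame in the global map frame (a pure translation here):
# rotation in the upper-left 3x3 block, translation in the last column.
T_map_table = np.array([[1.0, 0.0, 0.0, 2.0],
                        [0.0, 1.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.8],
                        [0.0, 0.0, 0.0, 1.0]])

# Object pose qualified with the local "table" frame
# (rotated 90 degrees about z, offset on the table surface).
T_table_obj = np.array([[0.0, -1.0, 0.0, 0.3],
                        [1.0,  0.0, 0.0, 0.1],
                        [0.0,  0.0, 1.0, 0.0],
                        [0.0,  0.0, 0.0, 1.0]])

# Chaining the transforms yields the object's pose in the global frame.
T_map_obj = T_map_table @ T_table_obj
print(T_map_obj[:3, 3])  # global (x, y, z) position of the object
```

Here the object ends up at (2.3, 1.1, 0.8) in the map frame: the local offset is simply added to the table's global position because the table frame is not rotated relative to the map.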
Since robots act in dynamic environments, object poses change over time. A naive representation that directly links an object to its current pose can therefore describe only a single, static world state: it cannot remember where an object was an hour ago, represent uncertainty about a detection, or describe where the object is expected to be in the future.

Memory, prediction, and planning, however, are central components of intelligent systems. The reason why the naive approach does not support such qualified statements is the limitation of OWL to binary relations that link exactly two entities. These relations can only express whether something is related or not; they cannot qualify a statement by saying that it held an hour ago, or that it is supposed to hold with a certain probability. For this purpose, we need an additional instance in between that links, for example, the object, the location, the time, and the probability.

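The difference can be illustrated with a small sketch (the names are hypothetical, not KnowRob's actual vocabulary): a binary relation carries no qualifiers, while a reifying intermediate instance has room for time and probability.

```python
from dataclasses import dataclass

# Binary relation, as plain OWL object properties allow: it can only say
# THAT cup1 is at loc_table, with no notion of when or how certain.
object_at = {("cup1", "loc_table")}

# Reified form: an intermediate instance links the object, the location,
# the time, and the probability, so the statement can be qualified.
@dataclass
class AtLocationBelief:
    obj: str
    location: str
    time: float         # when the relation held (Unix timestamp)
    probability: float  # confidence in the statement

belief = AtLocationBelief("cup1", "loc_table",
                          time=1354372140.0, probability=0.9)
```

Several such belief instances can coexist for one object, which is exactly what the flat pair representation cannot express.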
===== Pose representation in KnowRob =====
In KnowRob, these elements are linked by the event that created the respective belief: the perception of an object, an inference process, or the prediction of future states based on projection or simulation. The relation is thus reified, that is, transformed into a first-class object. These reified perceptions or inference results are described as instances of subclasses of MentalEvent.

Object recognition algorithms, for instance, are described as subclasses in the VisualPerception tree. Multiple events can be assigned to one object, describing different detections over time or differences between the current world state and the state to be achieved. The resulting internal representation is visualized below. Based on information from the vision system, KnowRob generates VisualPerception instances that link the object instance icetea2 to the different locations where it is detected over time.

//(figure: the object instance icetea2 linked to successive VisualPerception instances and the poses at which it was detected)//

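The temporal side of this representation can be sketched as follows (class and function names are illustrative, not KnowRob's API): each detection becomes its own event instance, and the current belief about an object's pose is simply its most recent detection.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class VisualPerception:
    """One reified detection event linking an object to a pose at a time."""
    obj: str
    pose: Tuple[float, float, float]  # detected (x, y, z) position
    stamp: float                      # detection time

# Three detections of icetea2 over time, as produced by the vision system.
events = [
    VisualPerception("icetea2", (1.0, 2.0, 0.8), stamp=100.0),
    VisualPerception("icetea2", (1.5, 2.0, 0.8), stamp=160.0),
    VisualPerception("icetea2", (0.4, 3.1, 0.8), stamp=220.0),
]

def latest_pose(obj: str,
                events: List[VisualPerception]
                ) -> Optional[Tuple[float, float, float]]:
    """The currently believed pose is the one from the newest detection."""
    obs = [e for e in events if e.obj == obj]
    return max(obs, key=lambda e: e.stamp).pose if obs else None
```

Because older events are kept rather than overwritten, the same structure also answers questions about where the object was at an earlier time.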
Using this representation, the system can answer queries about an object's pose at different points in time, for example by selecting the most recent perception of an object as its currently believed position.

(text taken from [[http:// |