A Review of Tactile Information: Perception and Action Through Touch

Tactile sensing is a key sensor modality for robots interacting with their surroundings. These sensors provide a rich and diverse set of data signals that contain detailed information collected from contacts between the robot and its environment. The data are however not limited to individual contacts and can be used to extract a wide range of information about the objects in the environment as well as the actions of the robot during the interactions. In this article, we provide an overview of tactile information and its applications in robotics. We present a hierarchy consisting of raw, contact, object, and action levels to structure the tactile information, with higher-level information often building upon lower-level information. We discuss different types of information that can be extracted at each level of the hierarchy. The article also includes an overview of different types of robot applications and the types of tactile information that they employ. Finally we end the article with a discussion for future tactile applications which are still beyond the current capabilities of robots.


I. INTRODUCTION
Touch is an important sensing modality for robots physically interacting with their environment. Tactile sensing provides robots with a rich set of diverse signals based on contacts between the robot and its environment. These signals provide the robot with information about the objects in its environment as well as its interactions with these objects, e.g., if an object begins to slip or if the robot has achieved a sturdy foothold. This information is crucial for performing a variety of tasks in a robust and reliable manner, including dexterous manipulation, locomotion, and human-robot interactions (HRIs). However, to use tactile sensing, robots first need to extract the relevant information from the sensor signals. We present an overview of the different types of tactile information that can be extracted from tactile signals. This information ranges from low-level forces at individual contact points to feedback for selecting complex actions based on previous interactions. We propose structuring the information in a hierarchy consisting of contact-, object-, and action-level information.
Higher-level information tends to build on the information extracted from lower levels, as illustrated in Fig. 1. While the contact level contains information from individual contact sites, the object level contains information regarding objects as a whole, often acquired from multiple contacts. The action level refers to information pertaining to the actions of the robot used to interact with the environment, which often builds upon contact- and object-level information. As touch is a contact-based sensing modality, actions are also important for extracting useful information from the interactions. All three of these levels build upon the sensor-level tactile signals. For each level of the hierarchy, we describe the different types of information that are commonly
extracted at that level. We also explain how robot applications employ tactile information from across the different levels. A number of previous tactile review papers focused on the sensor hardware [1]-[4] and discussed specific application domains, e.g., grasping and in-hand manipulation [5]-[7]. More recently, Luo et al. [8] surveyed progress on tactile perception with regard to object properties. Yamaguchi and Atkeson [9] reviewed the progress on tactile manipulation, highlighting vision-based tactile sensors and their applications. In this article, we propose a computational framework to structure tactile perception and action. We highlight the different types of tactile information and propose a hierarchy for structuring the information. We also explain how various robot applications employ tactile information from across the hierarchy. The rest of this article is organized as follows. Section II introduces an overview of tactile signals. Then, in Sections III-V, we explain in detail the tactile information at each level of the hierarchy. In Section VI, we briefly discuss analytical and data-driven methods for computing the tactile information. In Section VII, the applications are discussed with regard to how tactile information is used in rich tactile-based tasks, including tactile exploration, grasping, in-hand manipulation, tool manipulation, locomotion, HRI, and nonprehensile manipulation. Finally, Section VIII concludes this article.

II. SENSOR-LEVEL TACTILE SIGNALS
A robot with the sense of touch can acquire information about its surroundings through physical interactions. The sensors are located under the surface of an artificial skin [3], [10]-[12] and provide the robot with a wide range of signals. In this section, we briefly discuss common types of sensory signals that will form the basis of the computational framework proposed in this article. We also discuss tactile sensor coverage: how sensors are distributed on the body and how the spatial resolution is often selected depending on the given task and the sensor placement on the robot.

A. Normal and Tangential Force
The most common type of tactile signal is the contact force. Although tactile sensors are based on different physical principles, the raw sensor output (e.g., a tactile image or voltage) can be mapped to, or "calibrated" as, contact forces. Traditionally, the contact force is divided into two components: normal force and shear or tangential force. The normal force is the component applied orthogonally to the contact surface, and the tangential force is the component applied across the surface, e.g., friction. Most tactile sensors are able to measure the normal force [13], [14]. Some sensors, however, can directly measure the full 3-D force [15]-[19]. The majority of tactile sensors are composed of arrays of sensing elements [3], [20]-[22], and each element in the array is known as a taxel. Each taxel provides a local force estimate corresponding to its activation. In addition to the contact force, the contact torque may also be estimated by the sensor array [23], [24].

B. Vibration
Mechanical vibrations are another fundamental type of tactile signal and are often used to detect contact or slip events between contact surfaces [25]-[28]. When we use a hammer to strike a nail [29], we can feel the vibration via the mechanoreceptors of our skin [30]. A robot can similarly detect vibrations using a dynamic tactile sensor or, in some cases, an acoustic sensor, as microphones can detect vibrations and thereby serve tactile sensing purposes. Vibration signals from tactile sensors can be used to detect whether slip occurs between the sensors and the directly contacted object or between a grasped object and another object. This is possible because the vibrations in the latter case propagate additionally through the grasped object, and the tactile patterns perceived by the sensors in the two situations are therefore different [27], [31]. Like the fast afferents in human skin, vibration-based sensing is not as useful in static-contact situations, as motion is required to induce vibrations [32].

C. Thermal
Thermal tactile sensing allows a robot to measure the temperature of an object through touch [33], [34]. Similar to the thermal sensing of humans, robots are able to discriminate temperatures between 5 °C and 45 °C [35]. Thermal sensing can also be combined with heating elements to allow the robot to estimate the thermal conductivity of an object [22]. The differences in thermal conductivity can then be used to distinguish between different object materials [36], [37]. For example, metals transfer heat faster than most plastics and rubbers. Determining the thermal state of the contacted object is also important for deciding on the next actions, e.g., a robot can estimate the temperature of a cup to autonomously reason about whether it is safe for serving. In teleoperation scenarios, thermal sensing can help the operator to better judge the types of remote objects [38].

D. Pretouch Proximity
Pretouch is the ability to detect objects and their surfaces before making contact with them. A pretouch sensor can provide a robot with the relative geometric relation to an object [39], which is valuable for robot planners [40], [41] and controllers [42]-[44]. Given the distance, the robot can roughly estimate the shape and position of the object before making contact. Depending on the underlying technology, pretouch sensors may also provide additional information such as color and optical flow [45]. Pretouch sensors can also provide the robot with better predictions regarding when contact will be made and thus prevent the robot from making contact at high speeds. A variety of technologies for pretouch sensors has been developed, based on capacitive [46], magnetic [47], cutaneous infrared [48], audio [49], and optical sensing [50]. In this manner, pretouch sensing blurs the line between different sensor modalities.

E. Sensor Coverage
Besides the type of tactile signals, another important characteristic of tactile sensing is the distribution of sensors throughout the body of the robot. Body parts covered with high spatial resolution tactile sensors can be used for contact recognition and fine motor control, while parts with lower-resolution sensing can be used for simple contact detection and control. Most tactile skin designs for robots are inspired by the human tactile perception system, in which neuroscientists have found that the sensitive tactile receptors come in four different types with different-sized receptive fields and are unevenly distributed across the body [53]. For example, hands have higher tactile spatial resolutions than other body parts to facilitate dexterous manipulation.
Fig. 2. Shadow Dexterous Hand (left) that has been covered with a tactile skin [51] (middle) in order to provide tactile information at several locations (highlighted green regions on the right rendering).
Fig. 3. NAO humanoid robot covered with a multimodal artificial robotic skin that provides vibration, temperature, force, and proximity information [52].
Many robot hands are equipped with tactile sensors, especially at the fingertips [24], [54], [55]. An example of a robot hand with tactile sensors on the finger tips, phalanges, and palm [51] can be seen in Fig. 2. High spatial and temporal resolutions provide hands with rich tactile information for performing complex tasks, e.g., exploring unknown objects, tool use, and in-hand manipulation.
Tactile sensors can also be embedded in other robot body parts, such as the arms [56], [57], torso (Fig. 3), legs [58], and feet [59]. Contact information on these parts is closely associated with tasks involving whole-body encounters with obstacles, HRI, and locomotion. The density of the sensing elements is coarser on these parts because the sampling and processing of large areas of taxels are challenging from a hardware design and communication perspective [3]. The tactile sensors in load-bearing parts of the body are often designed in a more robust way to withstand the larger forces needed to grasp heavy objects [60] and support the weight of the robot [61] during walking tasks. Tactile sensors are normally located on the surfaces of regularly shaped links. However, flexible and stretchable sensors have been developed to cover body parts with complex shapes and curved surfaces [62]-[64], as well as the joints of the robot [65].

III. CONTACT-LEVEL INFORMATION
The basic function of tactile sensing is to provide a robot with information about the contact sites, such as local geometries, forces, material properties, and contact events (Fig. 4). In this section, we discuss the types of contact information and how they are extracted from raw tactile signals. This information forms the basic building blocks for the object-level information (Section IV) and is widely used for controlling the robot (Section V).

A. Contact Geometry
Tactile sensing is widely used to estimate the local geometry of a contact area including the contact position, normal, and curvature. Many tactile-related tasks rely on precise measurements of the contact position and normal direction, e.g., to compute grasp stability and manipulability [66]. Given a calibrated sensor array [67], the contact position in the tactile sensor frame is computed as the pressure-weighted center of the activated taxels and then mapped to the Cartesian frame of the robot using its forward kinematics [68], [69]. Using the same principle, the contact normal direction is estimated as the pressure-weighted normal direction of the activated taxels. The position and normal provide a first-order approximation of the local surface patch, with the normal direction defining the tangent plane at the contact point.
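To make the computation above concrete, the following is a minimal sketch (in Python with NumPy) of the pressure-weighted contact estimate; the taxel layout, function name, and activation threshold are illustrative assumptions rather than a specific sensor's API.

```python
import numpy as np

def contact_position_and_normal(pressures, taxel_positions, taxel_normals,
                                threshold=0.05):
    """Estimate the contact point and normal in the sensor frame.

    pressures:       (N,) raw pressure reading per taxel
    taxel_positions: (N, 3) taxel centers in the sensor frame
    taxel_normals:   (N, 3) unit surface normals per taxel
    """
    active = pressures > threshold
    if not np.any(active):
        return None, None  # no contact detected
    w = pressures[active]
    w = w / w.sum()  # normalized pressure weights
    position = w @ taxel_positions[active]  # pressure-weighted centroid
    normal = w @ taxel_normals[active]
    normal = normal / np.linalg.norm(normal)  # re-normalize the direction
    # Both quantities can then be mapped to the robot's Cartesian frame
    # using its forward kinematics, as described above.
    return position, normal
```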
Tactile sensors can also be used for estimating higher surface derivatives at the contact point, e.g., the surface curvature [70], [71]. Given a tactile sensor with high spatial resolution, the contact surface curvature can be measured based on the relative movement between the sensor and an unknown object when sliding or rolling occurs [70]. For sensors with lower resolutions, the contact surface can be approximated with a second-order polynomial equation [66], [71]. The curvature at the contact point is approximated by a local parameterized surface matrix that can be estimated through active tactile exploration.
Tactile image processing is another way to extract the contact geometry of a small region. Machine-learning [72], [73] and traditional image-based approaches [68], [74]-[76] are widely used to estimate the position of contacts and identify salient features such as edges, corners, and small protrusions. For example, a pronounced curvature on the surface of the object will appear as a line in the tactile image. The slope of the line, relative to the tactile sensor, can be estimated by the principal component of the image blob [77]. The 3-D direction of the edge is then estimated by combining the slope of the line with the forward kinematic model of the robot.
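As an illustration of the principal-component edge estimate described above, a small sketch follows; the image format, threshold, and function name are assumed for the example.

```python
import numpy as np

def edge_orientation(tactile_image, threshold=0.05):
    """Return the in-plane angle (radians) of an edge-like pressure blob."""
    rows, cols = np.nonzero(tactile_image > threshold)
    pts = np.stack([cols, rows], axis=1).astype(float)
    pts -= pts.mean(axis=0)  # center the blob of activated taxels
    cov = pts.T @ pts / max(len(pts) - 1, 1)  # 2x2 sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # principal component of the blob
    return np.arctan2(major[1], major[0])
```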

B. Force and Torque
Contact forces and/or torques are directly measured by several tactile sensors [13], [21], [22], [45], [65], [72]. Some sensors only provide the normal force while others also provide tangential forces to give the full 3-D force vector. Estimating forces at contacts is important, as it is the transfer of forces that allows the robot to physically interact with objects in its environment. When employing a point contact model, the contact force can be estimated from the contact pressures of all activated taxels [21], [68] or from the overall deformation of the sensor [45], [72].
Different approaches have been proposed for mapping raw tactile pressure values to contact forces and torques [78]-[80]. One example is to model the mapping as a linear function [81], [82]. The drawback of this approach is that it does not work well for sensors that have strong hysteresis or damping properties due to the skin material [2]. For such skin materials, more advanced machine learning algorithms are better suited. For example, locally weighted projection regression and artificial neural networks can be used to learn the nonlinear relation [26], and Gaussian processes can be used to compensate for the nonlinear hysteresis effects of sensors [83]. Finite-element methods have also been used to model the nonlinear functions and learn the model parameters [78].
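A minimal sketch of the linear calibration approach mentioned above might look as follows; the data arrays and names are placeholders, and a real calibration would use ground-truth forces from a reference force/torque sensor.

```python
import numpy as np

def fit_linear_calibration(raw_readings, reference_forces):
    """raw_readings: (M, N) taxel samples; reference_forces: (M, 3)
    ground-truth forces from a reference force/torque sensor."""
    # Append a bias column so the model can absorb constant sensor offsets.
    X = np.hstack([raw_readings, np.ones((raw_readings.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, reference_forces, rcond=None)
    return W  # (N + 1, 3) calibration matrix

def apply_calibration(raw_sample, W):
    """Map one raw taxel vector to an estimated 3-D contact force."""
    return np.append(raw_sample, 1.0) @ W
```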
Several approaches have also been proposed for estimating contact forces and torques without a tactile array. Most of these approaches rely on intrinsic tactile sensing, using motor torques [84], [85], motor torques and joint values [86], or hydraulic pressure values [87] to estimate the contact forces and torques.

C. Contact Events
Contact events refer to discrete changes in the contact state between two objects, which include making, breaking, sliding, and rolling contacts. Contact events often correspond to subgoals and errors when interacting with the environment. For example, breaking and making contact between a foot and the ground is the goal of each step in a walking task, while a sliding foothold is usually considered to be an error.
The transition between contact and no contact is a near-instantaneous event that results in a discontinuous interaction model. It is therefore important that the robot can detect these events quickly and reliably. A common way to detect contact is to compare the measured tactile signals with a given threshold [65], [88], which is estimated from previous contact experiences. Another method is to use dynamic tactile sensors [89], which compute vibration features to distinguish contact from no contact. Using this approach, it is possible to detect contacts not only between the skin of the robot and the environment but also between a grasped object and the environment [90].
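As a concrete illustration, the following sketch implements threshold-based contact-event detection; the hysteresis band (two thresholds instead of one) is an assumed detail added to avoid chattering around a single threshold, and all names and values are illustrative.

```python
def detect_contact_events(pressure_stream, t_high=0.2, t_low=0.1):
    """Scan a total-pressure stream and return (index, event) pairs.
    Contact is declared above t_high and released only below t_low, so
    noise between the two thresholds cannot toggle the state."""
    in_contact = False
    events = []
    for i, p in enumerate(pressure_stream):
        if not in_contact and p > t_high:
            in_contact = True
            events.append((i, "make"))
        elif in_contact and p < t_low:
            in_contact = False
            events.append((i, "break"))
    return events
```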
In addition to transitions from no contact to contact, the robot may also experience the change from a static contact to a sliding contact. Many approaches assume no slip or consider slip to be an error, as it is difficult to control. Robust detection of slip is therefore a crucial research topic. In practice, gross sliding is usually preceded by an incipient slip event, wherein only some low-pressure regions of the contact patch begin to slide [89]-[92]. That is to say, the outer regions of a fingertip begin slipping while the central region, where more pressure is applied, does not [68], [93]. Hence, detecting incipient slip is a key capability for preventing gross sliding. Incipient slip can be detected with high-frequency tactile feedback (normally > 1 kHz) using frequency-domain analysis approaches [88], [89], [92], [94]. Other approaches include learning from data and using image processing techniques, e.g., using a Hebbian network [95], using optical flow from a fingertip camera [96], or using random forests to predict slip from multimodal tactile information [97]. Robots can additionally differentiate between different types of slip, e.g., determining whether the gross sliding is rotational or translational [25], [26], [45], [76], [98]. For such classification, neural networks are trained to classify time-series tactile pressure data or visual features for vision-based tactile sensors [26], [98].
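A possible sketch of the frequency-domain slip cue is shown below; the sampling rate, frequency band, and score threshold are assumed example values rather than values from the cited work.

```python
import numpy as np

def slip_score(signal_window, sample_rate=2000.0, band=(100.0, 800.0)):
    """Fraction of the window's power inside the high-frequency band;
    incipient slip excites vibrations that raise this fraction."""
    window = signal_window - np.mean(signal_window)  # remove the DC offset
    power = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = power.sum()
    return power[in_band].sum() / total if total > 0 else 0.0

def incipient_slip(signal_window, score_threshold=0.3):
    return slip_score(signal_window) > score_threshold
```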

D. Material Properties
The interactive nature of touch allows a robot to estimate material properties of objects that may not be easily extracted visually. Researchers have defined 15 different properties for representing object surfaces using tactile information [99]. These properties can be reduced to five key dimensions, namely stiffness, friction, surface texture, thermal conductivity, and adhesion. Stiffness can be measured by the robot pressing into the object with a specified contact force and measuring the resulting displacement of the contact point [100]-[103]. Similarly, the coefficient of static friction can be estimated by measuring the normal and tangential forces when incipient slip occurs [104], [105].
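The two estimates above can be written down compactly; the following sketch assumes quasi-static press and slide trials with synchronized force and displacement logs, and all names are illustrative.

```python
import numpy as np

def estimate_stiffness(displacements, normal_forces):
    """Least-squares slope of normal force vs. indentation depth (N/m)."""
    A = np.stack([displacements, np.ones_like(displacements)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, normal_forces, rcond=None)
    return coeffs[0]  # slope = contact stiffness

def estimate_static_friction(tangential_forces, normal_forces, slip_flags):
    """mu_s = F_t / F_n at the first sample where incipient slip occurs."""
    slip_flags = np.asarray(slip_flags, dtype=bool)
    if not slip_flags.any():
        return None  # no slip observed during the trial
    idx = np.argmax(slip_flags)  # index of the first True entry
    return tangential_forces[idx] / normal_forces[idx]
```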
Texture information is useful for differentiating among materials, as well as for detecting blemishes or smooth surfaces. To achieve good classification results, a robot will often slide the tactile sensor across the surface and observe the resulting vibrations and time-series signals [106]. Classifiers, such as k-nearest neighbors (kNN), artificial neural networks (ANNs), and support vector machines (SVMs), can then be trained to classify the different textures [107]-[110]. Better classification performance can be achieved by employing multiple sliding motions with varying velocities and directions [111].
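A minimal version of such a texture-classification pipeline could look as follows, using spectral features of the sliding-induced vibrations and scikit-learn's SVM; the dataset variables are placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def spectral_features(windows):
    """windows: (M, T) vibration snippets -> (M, T//2 + 1) power spectra."""
    centered = windows - windows.mean(axis=1, keepdims=True)
    return np.abs(np.fft.rfft(centered, axis=1)) ** 2

def train_texture_classifier(train_windows, train_labels):
    """Fit an RBF-kernel SVM on standardized spectral features."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(spectral_features(train_windows), train_labels)
    return clf
```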
Thermal conductivity is another useful material property. While the temperature of an object is estimated by a thermal sensor, the thermal conductivity of the material is estimated from the rate of thermal energy transfer [22], [112]. Given that materials conduct heat at different rates, the detected transfer rate can also be used to classify different materials [37], [113].

IV. OBJECT-LEVEL INFORMATION
Many tasks involve interacting with or manipulating unknown objects. To perform these tasks reliably, the robot needs to estimate the state and properties of the manipulated objects. Tactile sensors can be used to acquire a wide range of object-level information via raw sensor values or by combining contact-level information during interactions. Inspired by human exploration actions for determining object properties [32], we illustrate several common actions to extract object-level information in Fig. 5.

A. Object Localization
A robot needs to accurately estimate the pose of an object in order to precisely manipulate it to a desired location. Localizing an object using only tactile feedback is not a trivial task. Approaches are commonly based on filtering theory [114]-[118]. These approaches use the measured contact positions and normals of the object as input. Probabilistic models of the pose of the object are then updated over time to capture the uncertainty of the estimate over multiple interactions [117].
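For illustration, a single particle-filter measurement update in this spirit is sketched below; `surface_distance` is an assumed user-supplied object model, and the Gaussian measurement noise is an example choice.

```python
import numpy as np

def update_pose_particles(particles, weights, contact_point,
                          surface_distance, sigma=0.005):
    """particles: (P, D) candidate object poses; weights: (P,) prior weights.
    surface_distance(pose, point) -> distance (m) from the point to the
    object surface when the object is at the given pose (assumed model)."""
    likelihoods = np.empty(len(particles))
    for i, pose in enumerate(particles):
        d = surface_distance(pose, contact_point)
        # Gaussian measurement model: measured contacts should lie on the
        # object surface under the correct pose hypothesis.
        likelihoods[i] = np.exp(-0.5 * (d / sigma) ** 2)
    posterior = weights * likelihoods
    return posterior / posterior.sum()  # normalized posterior weights
```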
In addition to accuracy, the efficiency of the estimation is also important, especially for tactile exploration tasks. To this end, optimization approaches can be used to select informative actions and explore the most uncertain parts of the pose space [116], [119]-[121]. It is often assumed that the object is static and not affected by the exploration [114], [119], [120], [122]. Researchers have also proposed methods for tracking the pose of an object while it is being moved [115], [123].

B. Shape
The shape of an object refers to its global geometry. Shape information is often needed for planning interactions with objects. For example, a robot with a two-fingered gripper needs to find opposing surfaces of an object for grasping. The shape can be reconstructed by measuring the positions and normals of surface patches [124]. Employing tactile sensing is especially useful for estimating the shape of an object in visually occluded regions [125]-[127]. The tactile sensors can thus play a complementary role to vision sensors for updating the model of an object.
By touching the object at a single location, tactile sensors only estimate the local shape of a relatively small region, so multiple contacts are needed in order to obtain the global shape [128]. This procedure is time consuming, and extensive efforts have therefore been focused on optimizing touch sequences to reduce the uncertainty of the estimated shape and improve the model as quickly as possible [129]-[132]. The shapes are often represented by Gaussian processes to model the uncertainty [131] or by fitting geometrical models to the tactile point clouds [116].
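As one hedged example of the Gaussian-process representation, the sketch below builds an implicit-surface model from contact points; the labeling scheme (0 on the surface, -1 inside, +1 outside) and kernel parameters are illustrative assumptions, not the formulation of the cited works.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_implicit_surface(contact_points, interior_point, exterior_points):
    """contact_points: (N, 3) sensed surface points; interior_point: (3,)
    an assumed point inside the object; exterior_points: (M, 3) free space."""
    X = np.vstack([contact_points, [interior_point], exterior_points])
    y = np.concatenate([np.zeros(len(contact_points)),
                        [-1.0],
                        np.ones(len(exterior_points))])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.05), alpha=1e-4)
    gp.fit(X, y)
    # The zero level set of gp.predict approximates the surface, and the
    # predictive std highlights uncertain regions worth touching next.
    return gp
```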

C. Mass and Dynamics
While an object is grasped or manipulated, it acts as an additional payload for the robot. The mass and center of mass (CoM) are then required for an accurate dynamics model of the new system composed of the robot arm and hand plus the grasped object [66]. When grasping an object, the robot should place its hand near the center of mass of the object to avoid large torques. Conversely, the robot can estimate the center of mass by slightly lifting the object and observing the torques. Correctly estimating the mass and center of mass allows the robot to avoid improper contact forces by using the estimated information to adapt the grasp force.
Estimating the mass and CoM of an object requires an interactive control procedure. Different action strategies and learning approaches have been proposed for estimating these parameters, as well as the inertia matrix of the object [121], [133]-[137]. While grasping a heavy object, a robot can estimate the mass parameters of the object based on the force and torque measurements from the wrist [133], [138]. For light objects, tactile estimates of the fingertip forces during simple lifting actions can be used to localize the center of mass of unknown objects [134], [135]. For an object that is too large to grasp, a robot can estimate its mass parameters by tipping it and stabilizing it in a different posture [136], [139].
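For the wrist-based case, the estimate admits a compact least-squares formulation: for a statically held object, the measured torque satisfies tau = r x f, where r is the CoM position in the wrist frame. A minimal sketch follows; readings from at least two distinct hand orientations are needed for the problem to be well posed, and all names are illustrative.

```python
import numpy as np

def skew(v):
    """Matrix S(v) such that S(v) @ r == v x r."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_mass_and_com(forces, torques, g=9.81):
    """forces, torques: (K, 3) wrist F/T readings in K static orientations.
    Needs K >= 2 with non-parallel force directions to fully constrain r."""
    forces = np.asarray(forces, dtype=float)
    torques = np.asarray(torques, dtype=float)
    mass = np.mean(np.linalg.norm(forces, axis=1)) / g
    # tau_k = r x f_k = -skew(f_k) @ r; stack all K equations, solve for r.
    A = np.vstack([-skew(f) for f in forces])
    b = torques.reshape(-1)
    com, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mass, com  # mass in kg, CoM position in the wrist frame
```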

D. Contents of Containers
The interactive nature of touch allows tactile sensing to estimate properties of objects that would otherwise be latent. In addition to the mass properties of an object, a robot may also determine the contents of container objects. For example, a robot may determine if a nonrigid container is full or empty, as well as open or closed, by squeezing it and observing the resulting tactile signals [140]. In addition to squeezing actions, a robot can shake a container to estimate the amount of material inside or the material properties of the contents [111], [141], [142]. One example approach estimated the viscosity of a liquid within a container by using a learned Gaussian process model and selecting different shaking behaviors to actively acquire the best estimate [141].
Fig. 6. Large arrows at the top show a sequence of action executions. In the lower part, horizontal baselines indicate the time window when the tactile information is being extracted. The arrows indicate the time point when the action is monitored and the action is performed or adapted. Action selection and initialization uses tactile signals from the previous interaction to select and set the parameters of the current action. Low-level control uses the sensory data from each time step to select the next low-level control input during the action execution. Action termination monitors the action at each time step and either continues the action (red) or terminates it (blue) if a specific contact event has been detected. Action outcome detection then determines if the executed action was successful, or which type of error occurred, based on the tactile signals acquired during the action execution. Action outcome verification uses an additional action to generate more tactile signals for determining the outcome.

V. ACTION-LEVEL INFORMATION
For most applications, robots need to execute sequences of actions to complete more complex tasks [143], e.g., grasping, transporting, and releasing for pick-and-place tasks or executing a sequence of steps for walking tasks. Tactile sensing can be used to compute action-level information for performing and monitoring complex tasks at all stages (see Fig. 6). Action-level information may be computed from contact-level [26], [88], [97] and object-level [115], [123], [134], [135] information, as well as directly from tactile signals [27], [144]-[147].

A. Action Selection and Initialization
Tactile data from previous actions and interactions can be used to select the next action and initialize its parameters. Robots can use previous actions to explore the objects and extract object-level information, and then use the updated object model for initializing the current action [115], [116], [121], [124], [148], [149].
The exploratory actions themselves are selected to acquire additional information based on the current belief about the object's properties [120]. For example, when the robot has acquired a partial 3-D model of an object based on previous tactile feedback, the next grasp should be selected to explore locations where the model is still uncertain [127], [131]. Similarly, the robot may select other exploration parameters, e.g., different normal forces and velocities for a series of finger sliding movements, to better recognize an object [106].
Action selection is also used for recovering from failed grasps and other actions. When a robot grasp attempt fails, the robot can use the data from the failure to extract latent object properties and adapt the grasp for the next attempt accordingly [146], [150]. The resulting regrasping strategy thus has a higher likelihood of succeeding by exploiting the tactile information to select the next grasp.

B. Low-Level Control
Tactile servoing uses feedback from tactile features to control the pose of the robot end-effector and maintain a certain amount of pressure while interacting with objects with various material properties [77], [82], [134], [153]-[155]. This approach allows the robot to actively extract geometric and material features of unknown objects [156], [157] in an exploratory manner and cope with the geometric and material uncertainties of the contacted objects. It is also useful for tracing the edges of objects [77]. In servoing tasks, action and perception are tightly coupled, as the robot needs to continuously adapt its actions to the current tactile signals.
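A minimal proportional tactile-servoing law in this spirit is sketched below; the gains, frames, and feature extraction are assumed placeholders, not the controllers of the cited works.

```python
import numpy as np

def tactile_servo_twist(contact_xy, pressure, target_xy, target_pressure,
                        k_slide=1.0, k_press=0.01):
    """Return a corrective velocity (vx, vy, vz) in the sensor frame:
    slide in-plane to re-center the contact, and move along the sensor
    normal (z) to regulate the contact pressure."""
    e_xy = np.asarray(target_xy, dtype=float) - np.asarray(contact_xy, dtype=float)
    e_p = target_pressure - pressure
    return np.array([k_slide * e_xy[0], k_slide * e_xy[1], k_press * e_p])
```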
In addition to continuous servoing control, the robot may also compute actions to respond to certain contact-level events, e.g., incipient slip. Reacting to these events requires quick reflexes. In many cases, incipient slip detection triggers an increase in the normal force exerted by the controller [25], [92]. Rather than waiting for the slip event, the robot may also learn to predict these events in advance based on tactile sensing [26], [97].

C. Action Termination
Action termination allows a robot to determine if the current action should be continued or terminated, such that the robot can then switch to another action. Terminating an inappropriate action early allows the robot to reduce the negative effects of colliding with objects or similar errors. Early termination also allows the robot to avoid errors when a goal is reached earlier than expected.
To perform action termination, the robot continuously monitors the tactile signals and learns a binary classifier. Given the learned classifier and the current tactile data, the robot can determine if the action should be continued or terminated [158]. Another way to decide whether an action should be terminated is to learn a model of the expected sensory signals [159]. The robot then compares the predicted output of this model to the actual sensor values during the action execution [155], [160]. The action is terminated if the deviation from the expected model output exceeds a given threshold.
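The model-deviation test described above reduces to a few lines; `predict_signal` stands in for a learned forward model, and the threshold is an assumed example value.

```python
import numpy as np

def should_terminate(predict_signal, step, measured_frame, threshold=0.5):
    """Terminate when the measured tactile frame deviates too far from the
    learned model's prediction for this time step."""
    expected = predict_signal(step)  # assumed learned forward model
    deviation = np.linalg.norm(np.asarray(measured_frame) - expected)
    return deviation > threshold  # True -> stop and switch actions
```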

D. Action Outcome Detection
Actions are not guaranteed to always succeed, even when robust tactile feedback is used in the control loop. Therefore, it is important for the robot to determine whether the goal was reached and no error occurred during execution. Outcome detection can be formulated as a classification problem where the classifier has a binary output indicating if the action was successful or not [161]. The robot may also attempt to determine the specific type of error if one occurred. The outcome classification may be achieved with a probabilistic approach that computes the probability of a successful outcome [150]. For the input, the robot may use the entire time-series tactile signal from the action execution. However, often only the final frame is used, as it tends to be highly informative for determining the outcome [145], [162].
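As a sketch of outcome detection on the final tactile frame, logistic regression can stand in for the classifiers used in the cited work; the dataset variables are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_outcome_detector(final_frames, success_labels):
    """final_frames: (M, N) flattened final tactile frames from M trials;
    success_labels: (M,) binary outcomes recorded for those trials."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(final_frames, success_labels)
    return clf

def grasp_success_probability(clf, final_frame):
    """Probability that the action succeeded, given the last tactile frame."""
    return clf.predict_proba(np.asarray(final_frame).reshape(1, -1))[0, 1]
```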
Outcome detection has been used for evaluating grasp stability [145], [150], [161]-[164]. In this case, classifiers or probabilistic models are used to determine if a grasp was successful before attempting to lift the object. Rather than continuing the planned sequence of actions, a detected failure may trigger a regrasping action or a replanning strategy [150]. Outcome detection can thus serve a similar role to action initialization, but for the subsequent action.
Outcome detection is also important for providing the robot with additional information for future executions of the action. The robot can use outcome detection as feedback for learning actions from experience [144], [165]. Successful outcomes give greater rewards and thus encourage similar action executions in the future.

E. Action Outcome Verification
Many outcomes are ambiguous given only the tactile signals observed during the action execution. A robot therefore needs to use interactive perception to disambiguate these situations. By applying an additional action and observing the resulting tactile signals, a robot can estimate the outcome of the previous action more reliably. For example, a robot may attempt to perturb a screw sideways to verify that it was correctly inserted into a hole [166]. A robot may also attempt to lift an object to verify that a grasp was successful [146]. Similar to direct outcome detection, outcome verification can be posed as a classification problem. However, the input data are provided by the tactile signals from the subsequent actions. Outcome verification requires extra time and effort to perform the subsequent action, and it may change the state as a result, but it also provides useful information for determining the action outcome.

VI. TACTILE COMPUTATION: ANALYTICAL AND DATA-DRIVEN MODELS
Tactile computation approaches can generally be divided into two groups: analytical and data-driven.
Analytical approaches exploit physics-based models to compute tactile information [71], [82], [89], [105], [117], [131], [138]. From the descriptions of tactile information in Sections III-V, it is clear that many of the processed signals have a clear physical meaning. Higher-level information, such as object properties and action commands, is computed from the raw tactile signals or from contact information by accumulating interactions and using physical models. By utilizing the principles of physics to create models of the environment, human operators can easily understand the perception procedure of the robot and its decision-making processes. However, these models normally rely on structured interactions and accurate feedback signals. If precise information about the interactions is not available and more complex tasks are being considered, simplifying assumptions about the interactions must be made to facilitate the design of the models.
Another approach is to employ data-driven methods to compute and process the tactile information implicitly [79], [150], [159], [167], [168]. These approaches learn mappings from raw sensory signals, or lower-level features, to high-level object properties and action commands. Supervised, unsupervised, and even reinforcement learning methods can be used to learn suitable features for a variety of tactile tasks. Hierarchical representations, such as neural networks, are often used to learn multiple levels of features. The learned intermediary features are generally not interpretable by humans. Data-driven methods generally do not require a precise model of the interaction and they tend to avoid brittle assumptions. Flexible representations allow the robot to adapt the learned model to the specific task based directly on data. It is often easier to provide data from contact-based interactions than to predefine an accurate analytical model.

VII. APPLICATIONS OF TACTILE INFORMATION
Rich tactile information opens up many possibilities for performing tactile-relevant tasks, including tactile exploration, grasping, in-hand manipulation, locomotion, tool manipulation, HRI, and nonprehensile manipulation. In this section, we summarize the relation between tactile information, computation methods, and applications in Table I. We describe how robots can compute and use contact-, object-, and action-level information within these complex task domains.

A. Tactile Exploration
Tactile exploration is an effective way to extract properties of an unknown object through touch [116], [121], [156], [157]. Humans use many exploration procedures to obtain knowledge about objects [32]: lateral motions, pressure, enclosure, contour following, object part motion tests, and affordance tests. Inspired by these findings, many tactile-based exploration methods have been developed for robots to estimate important parameters, including contact-level information such as local geometry [71] and material properties [106], as well as object-level information such as shape [169], [170] and mass [134]. In addition to analytical approaches, some researchers have proposed data-driven approaches to learning the contact geometry and properties of objects [80], [172] using large tactile datasets.
Another representative application of tactile sensing in exploration is tactile servoing control. This mainly involves computing action-level information to maintain a desired contact pattern, defined in terms of contact-level information, for single [77], [153] or multiple contact areas [171]. In the single-contact examples [77], [153], a planar tactile array is mounted on the end-effector of the arm. The desired tactile pattern is the specified contact position and force. The tactile array explores the surface of the object by implementing sliding and rolling actions. The goal of the action is to minimize the deviation from the desired tactile pattern. This approach can also be used to control a robot hand for exploration [171]. This controller used multiple contact areas on the hand and explored the surface of an unknown object to improve the grasping capability of the robot.

B. Nonprehensile Manipulation
Nonprehensile manipulation involves interacting with objects without explicitly grasping them. These types of manipulations include pushing, poking, punching, hooking, pivoting, flipping, throwing, squeezing, twirling, and striking. Tactile sensing in nonprehensile manipulation is often used to model the contacts [27] and provide low-level feedback control [175], [181], as well as to monitor the action status and estimate object properties [177], [178].
Different levels of tactile information are used for nonprehensile manipulation. Researchers have proposed methods for using contact-level slip information for pushing tasks [27], in which the tactile classifier was represented by a convolutional neural network. Contact-level information can also be used to compute feedback information for opening doors using a hook [175]. Another nonprehensile tactile controller was learned [181] for rolling tasks. Object-level information, e.g., center of mass, can be learned by pushing objects and subsequently used for controllers to perform more complex actions [177].

C. Grasping
Grasping is one of the most widely researched aspects of robot manipulation [231], as it provides a robot with control over the grasped objects and is a common prerequisite for tool usage. Similar to tactile exploration, tactile grasping is an important method for extracting object properties. Some researchers extracted contact-level information [90], [100] and object-level information [98], [114], [120], [137] using analytical methods, while others computed them with data-driven approaches [184]-[187].
Once the object has been grasped, tactile feedback can be used to control the contact forces [234], [235] and detect incipient slip [26], [90], [92]. By detecting or predicting incipient slip using tactile data, the robot can automatically increase its grip force to avoid gross slip without having to explicitly estimate the object-finger friction coefficients [236].
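A minimal sketch of such a slip-triggered grip reflex is shown below; the multiplicative gain, relaxation factor, and force limits are assumed example values.

```python
def grip_force_update(current_force, slip_detected,
                      gain=1.2, relax=0.999, f_min=1.0, f_max=20.0):
    """Multiplicative force increase on a slip cue, slow relaxation
    otherwise; forces in newtons, clamped to assumed safe limits."""
    f = current_force * (gain if slip_detected else relax)
    return min(max(f, f_min), f_max)
```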

D. In-Hand Manipulation
In-hand manipulation involves using the dexterity of the robot hand to change the state of a grasped object. For this task, some work focused on the computation of contact- and object-level information [167], [191], [193], [200], but the majority of work studied tactile controllers for repositioning grasped objects. These controllers have been demonstrated with either multifingered robot hands [24], [194], [198], [237]-[240] or grippers [241]-[243].
For local repositioning while maintaining contact, i.e., in-grasp manipulation, the object is precisely grasped and then moved with the fingertips of the robot [194]. Tactile sensing is used to estimate the contact and object information and actively control the contacts between the fingers and the object [195], [197]. Alternatively, tactile sensing can also be used to directly learn a mapping from the contact sensor information to the desired joint velocities of the robotic fingers [144], [201] using data-driven approaches.
In order to move the object further [244], robots need to use finger gaiting to switch between different grasps while maintaining the object in hand. To this end, the fingers need tactile sensing to detect breaking and making contact with the object to effectively walk the hand around the surface of the object. For dexterous hands, researchers have proposed using human-inspired finger gaiting strategies [196]. To imitate humans, demonstrations of gaiting behaviors with tactile signals can be acquired by fitting a human subject's hands with tactile finger caps [245].

E. Tool Manipulation
Tool usage is a core aspect of many manipulation tasks [246].
A key aspect of employing tactile sensing in tool manipulation is that the task contacts are between an object and the held tool. The contacts are therefore not directly on the tactile sensors, unless the tool is itself instrumented [203].
Tactile sensing can be used to detect contacts on tools, localize tooltips and other important parts of the manipulated tools [31], monitor the progress of tasks, and detect manipulation failures. Tactile feedback can also be used to estimate and maintain the orientation and forces at the contact points to perform compliant control tasks. Tactile sensing can also be employed in a dual-arm setup to estimate the kinematic parameters of a grasped tool [204].
Tactile sensing is also used for controlling and using unknown tools. The main challenge is that no kinematic or dynamic manipulation model can be directly used to compute the tactile action information for a given task. To this end, data-driven methods [154], [206], [207] offer a viable approach for computing the action commands implicitly. Tool tasks often involve multiple stages of contact interactions. For example, a peg may slide freely or become jammed during an insertion task. Tactile sensing allows the robot to interactively detect these different types of contact modes [158] and plan recovery actions accordingly [166].

F. Locomotion
Tactile sensing is not only useful for manipulation tasks but also for locomotion. Ground vehicles and walking robots, such as humanoid, quadruped, hexapod, and snake robots, need to use contacts with the environment to move around. Tactile sensing provides them with estimates of the state of the robot and the terrain [104], [247] during locomotion. For example, wheeled robots can use tactile sensing to monitor their contact with the ground [213]-[215]. Vibration signals from microphones or accelerometers in the wheels can be used for determining the type of terrain.
Maintaining balance is a key part of both standing and walking. To maintain balance, the robot needs to use tactile sensing to estimate its contact locations for support [104], as well as to detect obstacles and other perturbations that may cause it to lose balance [211]. Tactile sensing can also be used for learning a tactile-motor mapping for standing [210]. Walking requires deliberately planning and controlling gaits, which consist of multiple phases [248]. The transitions between phases are often triggered by contact events, such as the heel making contact with the ground. Tactile sensing has also been used to determine where to place a foot [212]. By estimating the type of terrain, a robot can generate appropriate gaits and switch to a suitable leg controller.

G. Human-Robot Interaction
In addition to interacting with inanimate objects and terrain, robots also need to physically interact with humans. HRI applications span a wide range of different tasks, including intuitive programming [220], putting on clothing [226], handing over objects [249], safe interaction with collaborative robots [250], [251], and shaving [221]. During these tasks, the robot needs to ensure that the interaction forces are safe and adapt to the human's body.
In the context of HRI, recent progress has explored inferring the human's latent state and intentions from sensory feedback [252]-[254]. For example, during a handover, the robot can employ vision and tactile feedback to determine when the human has a suitable grip and is ready to accept the object [227]. Similarly, when performing collaborative tasks such as carrying large objects, the robot measures interaction forces and torques through the force/torque sensors on its wrists to perform the task and react accordingly [222].
In addition to performing tasks, HRI can also be used to teach robots new skills from demonstrations [255]. In this manner, the robot can be directly guided by a human using force controllers [256]. Physical interactive corrections can also be used to refine a robot's skill executions through touch [228]. For instance, dressing is a fundamental activity of daily living, and robots offer an opportunity to assist people with motor impairments. In [219] and [225], the authors present a method to track a person's pose in real time using capacitive proximity sensing. Using tactile information, a robot can adjust for errors in the estimated pose of a person and physically follow the contours and movements of the person while providing dressing assistance.

VIII. CONCLUSION
The sense of touch is important for robotic systems to perceive the physical properties of objects, which is necessary for safe object interaction and manipulation. It also provides feedback for adapting the robot's executed actions. In this article, we discussed the main types of information that can be acquired using tactile signals, as well as the different types of applications in which this information can be used. We explained how tactile sensing can be used to acquire data related to individual contacts, objects, and actions. These different types of information can be extracted from the sensor signals using analytical techniques or data-driven approaches. The proposed categorization of tactile perception approaches is thus applicable to various task domains.
Our review of tactile information focused on the different types of estimation problems that robots must overcome, rather than the specific types of methods used to address these problems. In this manner, we provided a structure for this research field that can be applied both to past work as well as future research. Although we have made great strides as a research community, none of the presented problems should be considered as solved.
In addition to developments for the individual challenges that we discussed, we also expect to see more approaches that collect contact- and object-level information while performing actions for a given task. Methods for contact- and object-level information currently tend to employ purely exploratory skills, e.g., stroking and prodding [156], [171], or basic manipulations, e.g., simple grasping and lifting [114], [120], [135]. Similarly, action-level methods often assume that object and contact information is already provided or fixed and therefore does not need to be explicitly represented [122], [131]. Future work on tactile sensing will also explore more deeply how analytical and data-driven approaches can be combined more efficiently. Current methods often focus either on analytical [71], [89] or data-driven approaches [150], [167], [201]. Combining these two approaches will allow robots to work efficiently by exploiting their prior knowledge and adapting to novel situations autonomously.