
Program 

Local Time in Kyoto, Japan (GMT+9)

 October 23, 2022

09:00-09:05     Welcome & Opening


Etienne Burdet


Mohsen Kaboli

Lectures 

Henrik Jörntell

09:05-09:30  (20 min + 5 min Q/A)

 

Title: New emerging concepts in biological touch and haptics neuroscience    

Abstract: In studies of biological touch sensing, and in engineered haptics systems taking inspiration from biology, it is often implicitly assumed that each skin sensor is activated independently of the others. An implied consequence is that the skin as a sensing organ can be understood as a set of ‘taxels’, tactile pixels, where each independent sensor has a specific tuning and that is all there is to know. Here, I review evidence that biological touch sensing instead relies on complex distributed mechanical couplings across the skin, which cause the activation of large numbers of skin sensors. Because of these mechanical couplings, the activation pattern of each sensor has a dependency, or a specific relationship, to those of other sensors. This provides extremely rich, high-dimensional information generated in the nervous system whenever a haptic interaction is made. Recent neurophysiological data support this as the organizational principle for brain processing of haptic information. Further studies of the neocortical representations of haptic information imply a revision of how we view the function of the neocortical circuitry, with profound discrepancies from today’s dominant architectures of Artificial Neural Networks (ANN/DNN).
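The contrast the abstract draws between independent ‘taxels’ and mechanically coupled sensors can be illustrated in a few lines; the distance-based coupling kernel below is a purely hypothetical toy, not a model from the talk:

```python
import numpy as np

n = 9                                     # toy skin patch with 9 sensors
stimulus = np.zeros(n)
stimulus[4] = 1.0                         # a single point contact

# 'taxel' view: identity coupling, each sensor responds only to itself
taxel_response = np.eye(n) @ stimulus

# coupled view: activation spreads across the skin (hypothetical kernel),
# so one contact drives many sensors with mutually dependent patterns
idx = np.arange(n)
coupling = np.exp(-np.abs(idx[:, None] - idx[None, :]))
coupled_response = coupling @ stimulus
```

Under the identity coupling, exactly one sensor fires; under the coupled model, every sensor carries some signal, and the pattern across sensors encodes where and how contact was made.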


Jun Tani

09:30-09:55  (20 min + 5 min Q/A)

 

Title: Emergence of higher-order cognitive mechanisms: a report from robotic experiment studies extending active inference

Abstract: This study investigates how higher-order cognitive mechanisms can develop through iterative learning in robots by extending the idea of active inference (AIF). The study focuses on the self-organization of cognitive mechanisms for visual attention, visual working-memory manipulation, and prefrontal-cortex-like executive control, as well as the emergence of a conceptual space. Our synthetic robotic experiments on goal-directed plan generation in multiple-object manipulation tasks, which extend AIF, suggest that such higher cognitive mechanisms can develop through content-agnostic as well as context-sensitive interaction between the higher and lower levels during task learning.


Reference:
Queißer, J. F., Jung, M., Matsumoto, T., & Tani, J. (2021).  Emergence of Content-Agnostic Information Processing by a Robot Using Active Inference, Visual Attention, Working Memory, and Planning. Neural Computation, 33(9), 2353–2407. 
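The core perception–action loop of active inference, which the work above extends, can be sketched minimally. The scalar state, identity observation mapping, and learning rate below are illustrative assumptions, not the generative models used in the robot experiments:

```python
# Minimal active-inference sketch: the agent holds a belief mu about a
# hidden state, receives observations o, and reduces prediction error in
# two ways at once: perception (update mu to explain o) and action
# (change the world so o matches the belief/goal).

def g(mu):
    return mu                        # assumed observation mapping (hypothetical)

def active_inference_step(mu, o, action, lr=0.1):
    eps = o - g(mu)                  # sensory prediction error
    mu = mu + lr * eps               # perception: belief descends the error
    action = action - lr * eps       # action: the other route to reduce error
    return mu, action

mu, action = 0.0, 0.0
for o in [1.0, 1.0, 1.0]:            # repeated observation of the same stimulus
    mu, action = active_inference_step(mu, o, action)
```

Each iteration shrinks the prediction error, so the belief converges toward the observed value; the full AIF framework adds hierarchical generative models and precision weighting on top of this loop.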


Jeannette Bohg

09:55-10:20  (20 min + 5 min Q/A)

 

Title: Task-Driven In-Hand Manipulation of Unknown Objects with Tactile Sensing

Abstract: In-hand manipulation of objects without an object model is a foundational skill for many tasks in unstructured environments. In many cases, vision-only approaches may not be feasible; for example, due to occlusion in cluttered spaces or by the hand. I will present an approach to reorient unknown objects by incrementally building a probabilistic estimate of the object shape and pose during task-driven manipulation. Our method leverages Bayesian optimization to strategically trade-off exploration of the global object shape with efficient task completion. We demonstrate our approach on a Tactile-Enabled Roller Grasper, a gripper that rolls objects in hand while continuously collecting tactile data. We evaluate our method in simulation on a set of randomly generated objects and find that our method reliably reorients objects while significantly reducing the exploration time needed to do so. On the Roller Grasper hardware, we show successful qualitative reconstruction of the object model.
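The exploration/task trade-off described above can be caricatured with a UCB-style acquisition over candidate contact points; the point values, the beta weight, and the halving of uncertainty per touch are hypothetical stand-ins for the actual Bayesian optimization over a probabilistic shape estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 8
task_value = rng.random(n_points)    # how useful touching each point is for the task
uncertainty = np.ones(n_points)      # shape uncertainty at each candidate point

def select_contact(beta=1.0):
    # acquisition: exploit points useful for the task, explore uncertain regions
    return int(np.argmax(task_value + beta * uncertainty))

for _ in range(5):
    i = select_contact()
    uncertainty[i] *= 0.5            # touching a point shrinks its uncertainty
```

As uncertainty collapses at touched points, the acquisition shifts weight toward task completion, which is the qualitative behavior the method exploits to cut exploration time.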


Takao Someya

10:20-10:45  (20 min + 5 min Q/A)

 

Title: Electronic skins for robotics and wearables

Abstract: The human skin is a large-area, multi-point, multi-modal, stretchable sensor, which has inspired the development of electronic skins that let robots simultaneously detect pressure and thermal distributions. By improving its conformability, the application of electronic skin has expanded from robots to the human body, such that an ultrathin semiconductor membrane can be laminated directly onto the skin. Such intimate and conformal integration of electronics with the human skin, namely smart skin, allows for the continuous monitoring of health conditions. The ultimate goal of the smart skin is to non-invasively measure human activities under natural conditions, which would enable electronic skins and the human skin to interactively reinforce each other. In this talk, I will review recent progress in stretchable thin-film electronics for applications in robotics and wearables and address open issues and the future prospects of smart skins.


Perla Maiolino

10:45-11:10  (20 min + 5 min Q/A)

 

Title: What does it take to skin a robot? From sensing technology to perception: methods for control, data processing, and learning for artificial skin

Abstract: Robots operating in dynamic and unstructured environments must exhibit advanced forms of interaction with objects and humans. A “sense of touch” can play a fundamental role in enhancing the perceptual, cognitive, and operative capabilities of robots, specifically when they physically interact with objects and humans in the environment. However, endowing robots with a sense of touch is not an easy task. It requires not only finding solutions to scale up tactile sensing technologies to cover the whole robot body, but also developing new methods for control, data processing, and learning that are targeted at sensors integrated on general 3D surfaces. This talk will give an overview of a control method that exploits artificial skin to navigate in cluttered environments while controlling interaction forces, a method for processing data from tactile sensors on 3D shapes, and data-driven architectures for cross-modal object recognition and tactile data generation.

Paper presentation - I

    11:10-11:15

  • Deep Active Cross-Modal Visio-Tactile Transfer Learning for Robotic Object Recognition - P. Murali, C. Wang, D. Lee, R. Dahiya, M. Kaboli

 

    11:15-11:20

  • Impact of compliance of the skin on haptic information for complex stimuli - K. Kesgin, Y. Massalim, V. Hayward, H. Jörntell

     11:20-11:25

  • A Neuromorphic System for Real-time Tactile Texture Classification - G. Brayshaw, B. W. Cherrier, M. Pearson

     11:25-11:30

  • A Predictive Coding approach for active object property inference - A. Dutta, P. Murali, E. Burdet, M. Kaboli

     11:30-11:35

  • Active Inference for Active Localization in Tactile Robotics - P. Craig, L. Aitchison, N. Lepora

    11:35-11:40

  • Towards sensory restoration and augmentation: mapping visual distance to audio and tactile frequency - P. Jiang, J. Rossiter, C. Kent

    11:40-11:45

  • Power-Efficient and Accurate Texture Sensing Using Spiking Readouts for High-Density e-Skins - M. D. Alea, A. Safa, J. V. Assche, G. Gielen 

    11:45-12:00 

      Q&A on Paper presentation I

12:00-13:00     Lunch Break & Poster Session 


Robert Bruckmeier

13:00-13:25  (20 min + 5 min Q/A)

 

Title: AI-based vehicle functions

Abstract: The presentation will lay out recent AI predevelopment topics for BMW vehicles. More specifically, it will address the general importance of AI, start with the lead AI use case of automated driving, including safe AI and scene generation, and continue with the customer-facing topics of emotion detection and humanlike conversation. It concludes with performance optimizations, first applications in quantum computing, and a perspective on AI regulation.


Harold Soh

13:25-13:50  (20 min + 5 min Q/A)

 

Title: Tactile-based Physical Skills for Human-Robot Interaction

Abstract: My group — the Collaborative, Learning, and Adaptive Robots (CLeAR) lab — seeks to improve people’s lives through intelligent robotics. Our central focus has been on developing physical and social skills for robots. This talk will give an overview of our work on giving robots a physical skill: the sense of touch. We’ll detail our work on event-driven touch sensing and perception. If time permits, we’ll discuss very recent work showing how physical and social aspects can potentially be combined for intelligent grasping using discriminator gradient flows.
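Event-driven touch sensing of the kind mentioned above can be illustrated with a minimal leaky integrate-and-fire encoder that converts a continuous pressure trace into spike events; the threshold, leak factor, and input signal below are hypothetical, not parameters from the lab's sensors:

```python
# A leaky integrate-and-fire unit: the membrane value v integrates the
# input with a decay, and emits a discrete event when it crosses threshold.
# Event-driven perception pipelines then operate on these sparse spikes
# instead of dense, regularly sampled pressure frames.

def encode_spikes(pressure, threshold=1.0, leak=0.9):
    v, spikes = 0.0, []
    for t, p in enumerate(pressure):
        v = leak * v + p             # leaky integration of the input
        if v >= threshold:           # threshold crossing -> emit an event
            spikes.append(t)
            v = 0.0                  # reset after the spike
    return spikes

signal = [0.0, 0.0, 0.6, 0.6, 0.6, 0.0, 0.0]   # a brief press
events = encode_spikes(signal)
```

A sustained press produces only occasional events rather than a sample at every timestep, which is what makes event-driven encodings sparse and power-efficient.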


Tapomayukh Bhattacharya

13:50-14:15  (20 min + 5 min Q/A)

 

Title: Building Caregiving Robots: A Tale of Three Sensing Modalities

Abstract: How do we build robots that can assist people with mobility limitations with activities of daily living? To successfully perform these activities, a robot needs to be able to physically interact with humans and objects in unstructured human environments over large contact surfaces. Multimodal sensing can enable a robot to rapidly infer properties of contact with its surroundings. This talk will showcase how a robot can use the interplay between force, thermal, and visual sensing modalities during manipulation to perceive properties of these physical interactions using data-driven methods and physics-based models. I will also touch upon some of our recent efforts on developing a simulation platform for caregiving robots, RCareWorld, that enables realistic physical interactions across the whole robot arm with virtual human avatars built using clinical data.

Paper presentation - II

     14:15-14:20

  • GTac-Hand: A Robotic Hand with Integrated Tactile Sensing and External Contact Sensing Capabilities - Z. Lu, H. Guo, D. Carmona, S. Bhattacharya, H. Yu

     14:20-14:25

  • Towards interactive visio-tactile perception for robust pose estimation in clutter - P. Murali, A. Dutta, M. Gentner, E. Burdet, R. Dahiya, M. Kaboli

     14:25-14:30

  • A Multi-Functional Soft Sensor using Heterogeneous Sensing Mechanism for Physical Human-Robot Interfaces - T. Kim, S. Lee, T. Hong, G. Shin, T. Kim, Y. L. Park

     14:30-14:35

  • Pneumatic tactile sensor design with acoustic resonance - M. Li, T. M. Huh, J. Aderibigbe, C. R. Yahnker, H. S. Stuart

     14:35-14:40

  • E-Troll: A Simple Robotic Gripper for Tactile Shape Classification via In-Hand Manipulation - X. Zhou, A. Spiers

     14:40-14:45

  • GTac: A Biomimetic Tactile Sensor with Skin-Like Heterogeneous Force Feedback for Robots - Z. Lu, X. Gao, H. Yu

     14:45-14:50

  • Single-Input Single-Output Multi-Touch Soft Sensor Systems using Band-Pass Filters - J. Kim, S. Kim, Y. L. Park

     14:50-15:00 

      Q&A on Paper presentation II


Etienne Burdet

15:00-15:25 (20 min + 5 min Q/A)

 

Title: A predictive coding approach to haptic exploration

Abstract: We are developing a predictive coding approach that enables a robot to extract haptic information during interaction with the environment and thus identify various objects. In this talk, I will first present our method to recognise objects based on online identification (using a dual Kalman filter) of representative mechanical properties from position, interaction force, and vibrations. Twenty objects could be identified reliably, with a higher recognition rate than when using statistical features of the interaction signals. I will then present the electronic skin we have developed to enable a robot to also perceive an object’s geometric information from a distributed tactile array. Critically, our eSkin can vary its viscoelasticity, enabling us to understand its role in processing the mechanical interaction signals. This is used to recognise objects with different shapes, textures, and compliance, highlighting its efficacy for developing haptic perception.
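Online identification of a mechanical property with a Kalman filter, one half of the dual-filter scheme mentioned above, can be sketched as follows; the linear stiffness model f = k·x, the noise levels, and the true stiffness are illustrative assumptions, not values from the talk:

```python
import numpy as np

# Scalar Kalman filter treating an unknown contact stiffness k as a
# slowly drifting parameter, identified online from force/position
# samples f = k * x + noise. (The full dual scheme runs a second filter
# for the contact state in parallel.)

def identify_stiffness(xs, fs, q=1e-4, r=0.01):
    k_hat, P = 0.0, 1e4              # vague prior over the parameter
    for x, f in zip(xs, fs):
        P += q                       # predict: parameter random walk
        H = x                        # measurement model: f = H * k
        S = H * P * H + r            # innovation variance
        K = P * H / S                # Kalman gain
        k_hat += K * (f - H * k_hat) # correct with the force innovation
        P *= (1.0 - K * H)
    return k_hat

rng = np.random.default_rng(1)
xs = rng.uniform(0.01, 0.05, 200)            # indentation depths (m)
fs = 300.0 * xs + rng.normal(0, 0.1, 200)    # forces from a true k = 300 N/m
k_est = identify_stiffness(xs, fs)
```

The estimate converges to the underlying stiffness within a few dozen samples; running such a filter during interaction is what makes the property estimate available online rather than after a dedicated probing phase.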


Thrishantha Nanayakkara

15:25-15:50  (20 min + 5 min Q/A)

 

Title: Embodiment and likelihood functions of haptic perception

Abstract: When a physician examines a patient’s abdomen to feel various tissue conditions, they continuously tune the posture and stiffness of their hands and fingers to sharpen haptic perception. They also orient the fingers depending on the nature of the touch information, such as the movement of the lower edge of the liver or swelling in the intestine. Such searches in the kinematic and dynamic space of the hands intensify when there is ambiguity in the visual feedback, such as pain expressions on the face in response to palpation forces on painful areas. In this talk, I will show some robot-assisted training examples where we capture interesting phenomena of tuning the kinematics and dynamics of the body to sharpen haptic perception in pure haptic tasks as well as in visuo-haptic tasks of real-time estimation of random variables.


Vincent Hayward

15:50-16:15  (20 min + 5 min Q/A)

 

Title: Human tactile mechanics show surprising properties 

Abstract: Tactile sensing is inherently mechanical. Since robotics takes inspiration from living creatures, it is worth examining the properties of human extremities in some depth. Human finger pads, which contribute fundamentally to prehension, are also sensing organs. In this presentation, we will comment on a number of their properties that clearly impact both prehension and sensing. It is hoped that some of these findings could be applied to the design and control of robot hands.


Oliver Brock 

16:15-16:40  (20 min + 5 min Q/A)

 

Title: Tactile? Yes! But What and How?

Abstract: Tactile sensing and the resulting control methods are likely critical enablers of robotic manipulation and dexterity. When we think of tactile sensors, we often look to biological sensors, such as skin, as design goals. When we think about control, however, we rarely seek biological inspiration. Instead, we still resort to traditional approaches, modeling everything and then using control methods under the assumption that the models are sufficiently accurate. In this talk, I will explore possible implications of taking an integrated approach to manipulation in which hardware, sensing, and control are attuned to each other. In such a scenario, sensing can be distributed across explicit, physical sensors and the implicit sensing realized through compliance. Similarly, control can be distributed across actuators and features of a compliant morphology. In such a scenario, do the requirements for sensing change? If so, how? Do we need different types of sensors? Are we thinking too narrowly if we use encoders and cameras as conceptual analogues for sensors in the context of manipulation?

16:40-17:00     Panel Discussion (20 minutes)

Best Paper Award & Concluding Remarks

17:00-21:00     Mingling & Workshop Dinner