09:00-09:05 Welcome and opening remarks
Lectures
09:05-09:30 (20 min + 5 min Q&A)
Title: New emerging concepts in biological touch and haptics neuroscience
Abstract: In studies of biological touch sensing, and in engineered haptics systems taking inspiration from biology, it is often implicitly assumed that each skin sensor is activated independently of the others. An implied consequence is that the skin as a sensing organ can be understood as a set of ‘taxels’, tactile pixels, where each independent sensor has a specific tuning and that is all there is to know. Here, I review evidence that biological touch sensing instead relies on complex distributed mechanical couplings across the skin, which cause the activation of large numbers of skin sensors. Because of these mechanical couplings, the activation pattern of each sensor has a dependency on, or a specific relationship to, those of other sensors. This provides extremely rich, high-dimensional information in the nervous system whenever a haptic interaction is made. Recent neurophysiological data support this as the organizational principle for brain processing of haptic information. Further studies of the neocortical representations of haptic information imply a revision of how we view the function of neocortical circuitry, with profound discrepancies from today’s dominant Artificial Neural Network architectures (ANN/DNN).
09:30-09:55 (20 min + 5 min Q&A)
Title: Emergence of higher-order cognitive mechanisms: a report from robotic experiment studies extending active inference
Abstract: This study investigates how higher-order cognitive mechanisms can develop through iterative robot learning by extending the idea of active inference (AIF). The study focuses on the self-organization of cognitive mechanisms for visual attention, visual working memory manipulation, and prefrontal-cortex-like executive control, as well as the emergence of a conceptual space. Our synthetic robotic experiments on goal-directed plan generation in multiple-object manipulation tasks, conducted by extending AIF, suggest that such higher cognitive mechanisms can develop through content-agnostic as well as context-sensitive interaction between the higher and lower levels during task learning.
Reference:
Queißer, J. F., Jung, M., Matsumoto, T., & Tani, J. (2021). Emergence of Content-Agnostic Information Processing by a Robot Using Active Inference, Visual Attention, Working Memory, and Planning. Neural Computation, 33(9), 2353–2407.
09:55-10:20 (20 min + 5 min Q&A)
Title: Task-Driven In-Hand Manipulation of Unknown Objects with Tactile Sensing
Abstract: In-hand manipulation of objects without an object model is a foundational skill for many tasks in unstructured environments. In many cases, vision-only approaches may not be feasible, for example due to occlusion in cluttered spaces or by the hand itself. I will present an approach to reorient unknown objects by incrementally building a probabilistic estimate of the object shape and pose during task-driven manipulation. Our method leverages Bayesian optimization to strategically trade off exploration of the global object shape against efficient task completion. We demonstrate our approach on a Tactile-Enabled Roller Grasper, a gripper that rolls objects in hand while continuously collecting tactile data. We evaluate our method in simulation on a set of randomly generated objects and find that it reliably reorients objects while significantly reducing the exploration time needed to do so. On the Roller Grasper hardware, we show successful qualitative reconstruction of the object model.
10:20-10:45 (20 min + 5 min Q&A)
Title: Electronic skins for robotics and wearables
Abstract: The human skin is a large-area, multi-point, multi-modal, stretchable sensor, which has inspired the development of electronic skins that allow robots to simultaneously detect pressure and thermal distributions. By improving its conformability, the application of electronic skin has expanded from robots to the human body, such that an ultrathin semiconductor membrane can be directly laminated onto the skin. Such intimate and conformal integration of electronics with the human skin, namely smart skin, allows for the continuous monitoring of health conditions. The ultimate goal of smart skin is to non-invasively measure human activities under natural conditions, which would enable electronic skins and the human skin to interactively reinforce each other. In this talk, I will review recent progress in stretchable thin-film electronics for applications in robotics and wearables, and address open issues and future prospects for smart skins.