CHI and UIST are premier venues in HCI and related fields. Publications in these conferences are counted toward CSRankings and are regarded as S-tier by KIISE.
We present Mod2Hap, a two-level modular haptic system that enables both body-scale and hand-scale interactions using magnetorheological (MR) fluid. The system comprises (1) configurable frame modules for constructing spatial interaction structures, which can be flexibly assembled to suit various body postures and environments, and (2) haptic modules—rotary (e.g., for dials or pedals) and linear (e.g., for sliders or handles)—that provide customizable hand-scale feedback interfaces. Each haptic module utilizes MR fluid, whose viscosity varies with applied magnetic fields, to generate tunable resistive feedback. Our design achieves a wide feedback range (0.12–0.52 N·m torque, 41–212 N force) by optimizing the solenoid coil for power efficiency and validating magnetic field distribution via simulation. We demonstrate Mod2Hap through three interactive scenarios—cycling, kayaking, and fishing—and evaluate its performance in a user study with 12 participants. Results show high perceived realism and engagement, supporting the system’s versatility, scalability, and effectiveness as an immersive haptic interaction platform.
The skin is an increasingly popular surface for pressure-based input, but input performance varies across body locations. This study investigated how anatomical differences affect pressure interaction. In our first experiment, we measured the maximum comfortable force (MCF) on the forearm, the back of the hand, and the knuckle. In the second, we examined how target size and location influence input accuracy and control. We found that larger targets reduced task difficulty, while bony areas, such as the knuckle, supported more stable control than soft regions, such as the forearm.
We present HapEar, a headset-mounted vibrotactile display that delivers 3D spatial cues through eight ERM actuators positioned around the ears. This demo allows participants to experience and compare single and multi-point stimulations for spatial localization, and to reflect on their performance, providing insights into ear-based tactile feedback for multisensory interaction.
Grain-based compliance illusion is a haptic technique that enables the perception of compliance on rigid surfaces. Previous research on threshold-based compliance illusions primarily focused on tuning parameters, but the comparative effects of different implementation strategies remain unexplored. This study compares two major implementation strategies: fixed threshold, where vibration is triggered whenever the applied force crosses a constant threshold, and adaptive threshold, where vibration occurs only when the change from the last vibration point crosses the threshold. We evaluated these strategies across eight interaction conditions combining variations in force magnitude, rate, and sustain. A four-way ANOVA revealed a statistically significant difference in perceived compliance values depending on the implementation strategy (p < 0.05), with the fixed threshold strategy yielding higher values. These results suggest that the choice of implementation strategy plays a critical role in shaping compliance perception, offering design implications for robust haptic illusions in immersive interfaces.
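The two strategies compared above can be sketched as trigger rules over a sampled force signal. The following Python sketch is illustrative only: the function names, the rising/falling-edge handling, and the threshold value in the example are assumptions, not details from the study.

```python
def fixed_threshold_triggers(forces, threshold):
    """Fixed strategy: a grain vibration fires whenever the force signal
    crosses the constant threshold (in either direction)."""
    triggers = []
    for i in range(1, len(forces)):
        prev, cur = forces[i - 1], forces[i]
        if (prev < threshold) != (cur < threshold):  # threshold crossing
            triggers.append(i)
    return triggers


def adaptive_threshold_triggers(forces, threshold):
    """Adaptive strategy: a grain fires only when the force has changed by
    the threshold amount since the last grain (or since the start)."""
    triggers = []
    anchor = forces[0]
    for i in range(1, len(forces)):
        if abs(forces[i] - anchor) >= threshold:
            triggers.append(i)
            anchor = forces[i]  # re-anchor at the last vibration point
    return triggers
```

The difference is visible on a monotonic force ramp: the fixed strategy fires once at the crossing, while the adaptive strategy keeps firing as the force keeps growing, which is one plausible mechanism behind the perceptual difference the study reports.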
Grain-based compliance illusion mimics the mechanical vibrations that occur when a compliant object deforms with grain-like, short (∼15 ms) impulse-response vibrations. Previous work has demonstrated its robust effect on various types of devices. However, the impact of the device’s inherent compliance (i.e., base compliance) on perceived compliance remains unclear. This paper investigates the influence of base compliance on the perception of illusory compliance through three psychophysical experiments. The results show that (1) the compliance illusion remained effective with base compliance, (2) the description of compliance was affected by both illusory and base compliance, and (3) it is possible to render the compliance with the same magnitude but multiple different feelings.
2024
Understanding User Force Input Performance in Interfaces That Use the Force of Pressing One's Own Skin as Input
Jiseong Kim, Dusan Beak, Aditya Shekar Nittala, and Jaeyeon Lee
The skin can deform in diverse ways, which makes it an advantageous interface for a wide range of interactions. Although pressure input has been studied extensively in prior work, it has received little attention in the context of the skin. In this paper, we explore how people perform when using pressure on their skin as input. We present the results of two experiments. In the first experiment, we identify the range of the strongest yet comfortable force when performing pressure input at three body locations. In the second experiment, we investigate how target size and body location affect pressure input. Our results show that target size has a greater effect than body location, implying that proprioception enables accurate control of pressure levels regardless of body location. We expect these findings to be useful for the design of proprioceptive pressure input on the skin.
We investigated a method of attaching 3D vibrotactile grid displays to the back and side of a controller to convey 3D coordinate information around a manipulated object. Two user experiments with 12 participants were conducted to select vibration pattern designs suitable for conveying angle and elevation. Comparing participants' accuracy and response time, rendering vibration points in an elliptical or rectangular form was suitable for expressing angle on the rear grid, while fixing the start and end points of the vibration at the tip of the index finger provided more accurate and faster recognition of elevation on the side grid. In a follow-up experiment, we plan to explore vibration pattern designs that can express angle and elevation simultaneously.
The paradigm of bare-hand interaction has become increasingly prevalent in Augmented Reality (AR) and Virtual Reality (VR) environments, propelled by advancements in hand tracking technology. However, a significant challenge arises in delivering haptic feedback to users’ hands, due to the necessity for the hands to remain bare. In response to this challenge, recent research has proposed an indirect solution of providing haptic feedback to the forearm. In this work, we present QuadStretcher, a skin stretch display featuring four independently controlled stretching units surrounding the forearm. While achieving rich haptic expression, our device also eliminates the need for a grounding base on the forearm by using a pair of counteracting tactors, thereby reducing bulkiness. To assess the effectiveness of QuadStretcher in facilitating immersive bare-hand experiences, we conducted a comparative user evaluation (n = 20) with a baseline solution, Squeezer. The results confirmed that QuadStretcher outperformed Squeezer in terms of expressing force direction and heightening the sense of realism, particularly in 3-DoF VR interactions such as pulling a rubber band, hooking a fishing rod, and swinging a tennis racket. We further discuss the design insights gained from qualitative user interviews, presenting key takeaways for future forearm-haptic systems aimed at advancing AR/VR bare-hand experiences.
In virtual reality (VR), users manipulate various properties of virtual objects, such as position, orientation, and size, and such manipulation requires bidirectional control. Isotonic inputs such as in-air gestures, joysticks, and touchpads support bidirectional control, but they suffer from limitations such as restricted movement space and gesture-recognition errors. In this paper, we instead consider isometric force-based input, which is free from these limitations, and explore the feasibility of a device that uses the hand's extension force (opening), which has been relatively underexplored in existing input devices, together with its flexion force (closing) as bidirectional force input. As a first step, we built a desk-mounted prototype that senses the flexion and extension forces between the thumb and fingers. We then set the parameters of two control methods, rate control and position control, based on the maximum force in each direction and on manipulation performance. A 1-D selection experiment using each force direction and control method showed that flexion and extension performed similarly regardless of control method, and that rate control was 50% more accurate than position control. These results show that bidirectional force input including the relatively underused extension force performs sufficiently well, and that users can intuitively use flexion and extension for bipolar manipulation.
Perceptibility of programmable softness displays using magnetorheological elastomers
Bingxu Li , Jaeyeon Lee, and Gregory J. Gerling
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
While often focused on our visual system, adding touch to VR/AR environments can help render more immersive, richer user experiences. One important touch percept to render is compliance, or ‘softness.’ Herein, we evaluate the perceptibility of soft, magnetorheological elastomers (MRE) in bare-finger interactions. Such materials can be reprogrammed to distinct states of compliance. We fabricated MRE samples over elastic moduli from 23–173 kPa and measured that small 0.25 T magnetic fields increased modulus by 10–60 kPa. MRE interfaces less and more compliant than finger skin were evaluated in discrimination experiments with and without a magnetic field. The results indicate changes in modulus of 11 kPa are required to reach a 75% threshold of discrimination, although greater differences are required when an MRE’s elasticity is about the same as skin. The perceptual results with these magnetically-induced materials are similar to those with non-actuated, solid silicone-elastomers that mimic naturalistic interactions.
The key assumption attributed to on-body touch input is that the skin being touched provides natural tactile feedback. In this paper, we for the first time systematically explore augmenting on-body touch input with computer-generated tactile feedback. We employ vibrotactile actuation on the fingernail to couple on-body touch input with tactile feedback. Results from our first experiment show that users prefer tactile feedback for on-body touch input. In our second experiment, we determine the frequency thresholds for rendering realistic tactile “click” sensations for on-body touch buttons on three different body locations. Finally, in our third experiment, we dig deeper to render highly expressive tactile effects with a single actuator. Our non-metric multi-dimensional analysis shows that haptic augmentation of on-body buttons enhances the expressivity of on-body touch input. Overall, results from our experiments reinforce the need for tactile feedback for on-body touch input and show that actuation on the fingernail is a promising approach.
Being mindful of the sounds one produces is difficult in shared houses, because individuals inevitably generate unintended everyday noises. We designed Tunee to help each housemate better understand the others' context and desired noise level. It is an interactive speaker that allows people to share noise-level preferences by changing the position of nodes. Our three-week in-field study with four groups of participants revealed that expressing noise-level preferences through the nodes reduced the burden of verbally raising issues about the trivial noises of everyday life, and that the intentions behind a lowered preference were consulted and regarded as significant. We also identified how participants figured out what behavior was acceptable to others at each noise level. Our findings imply considerations for designing interfaces that support coordinating behaviors and awareness of social contexts in shared spaces.
We present PseudoBend, a haptic feedback technique that creates the illusion that a rigid device is being stretched, bent, or twisted. The method uses a single 6-DOF force sensor and a vibrotactile actuator to render grain vibrations to simulate the vibrations produced during object deformation based on the changes in force or torque exerted on a device. Because this method does not require any moving parts aside from the vibrotactile actuator, devices designed using this method can be small and lightweight. Psychophysical studies conducted using a prototype that implements this method confirmed that the method could be used to successfully create the illusion of deformation and could also change users’ perception of stiffness by changing the virtual stiffness parameters.
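The grain-rendering idea behind PseudoBend can be sketched as follows: map the sensed torque to a virtual deformation via Hooke's law and fire one short grain vibration per fixed increment of that virtual travel, so a higher virtual stiffness yields sparser grains for the same torque change. This is a minimal sketch of that mapping; the function name, the grain spacing `dx`, and the sample values are illustrative assumptions, not parameters from the paper.

```python
def pseudobend_grains(torque_trace, stiffness, dx=0.25):
    """Sketch: virtual deformation x = tau / k (Hooke's law); one grain
    vibration fires per dx of virtual travel. Equivalently, one grain
    fires per (stiffness * dx) of torque change. stiffness and dx are
    illustrative parameters, not values from the paper."""
    step = stiffness * dx          # torque change corresponding to one grain
    grains = []                    # sample indices at which grains fire
    anchor = torque_trace[0]       # torque at the last fired grain
    for i, tau in enumerate(torque_trace[1:], 1):
        while abs(tau - anchor) >= step:
            grains.append(i)       # fire a grain at this sample
            anchor += step if tau > anchor else -step
    return grains
```

Doubling the virtual stiffness halves the grain count for the same torque excursion, which matches the abstract's claim that changing the virtual stiffness parameter changes perceived stiffness.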
Recent hand-held controllers have explored a variety of haptic feedback sensations for users in virtual reality by producing both kinesthetic and cutaneous feedback from virtual objects. These controllers are grounded to the user’s hand and can only manipulate objects through arm and wrist motions, not using the dexterity of their fingers as they would in real life. In this paper, we present TORC, a rigid haptic controller that renders virtual object characteristics and behaviors such as texture and compliance. Users hold and squeeze TORC using their thumb and two fingers and interact with virtual objects by sliding their thumb on TORC’s trackpad. During the interaction, vibrotactile motors produce sensations to each finger that represent the haptic feel of squeezing, shearing or turning an object. We demonstrate the TORC interaction scenarios for a virtual object in hand.
In countries where languages with non-Latin characters are prevalent, people use a keyboard with two language modes, namely the native language and English, and often experience mode errors. To diagnose the mode error problem, we conducted a field study and observed that 78% of the mode errors occurred immediately after application switching. We implemented four methods (Auto-switch, Preview, Smart-toggle, and Preview & Smart-toggle) based on three strategies to deal with the mode error problem and conducted field studies to verify their effectiveness. In the studies considering Korean-English dual input, Auto-switch was ineffective. In contrast, Preview significantly reduced the mode errors from 75.1% to 41.3%, and Smart-toggle saved typing cost for recovering from mode errors. In Preview & Smart-toggle, Preview reduced mode errors and Smart-toggle handled 86.2% of the mode errors that slipped past Preview. These results suggest that Preview & Smart-toggle is a promising method for preventing mode errors in the Korean-English dual-input environment.
The emerging class of epidermal devices opens up new opportunities for skin-based sensing, computing, and interaction. Future design of these devices requires an understanding of how skin-worn devices affect the natural tactile perception. In this study, we approach this research challenge by proposing a novel classification system for epidermal devices based on flexural rigidity and by testing advanced adhesive materials, including tattoo paper and thin films of poly (dimethylsiloxane) (PDMS). We report on the results of three psychophysical experiments that investigated the effect of epidermal devices of different rigidity on passive and active tactile perception. We analyzed human tactile sensitivity thresholds, two-point discrimination thresholds, and roughness discrimination abilities on three different body locations (fingertip, hand, forearm). Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Surprisingly, thin epidermal devices based on PDMS with a hundred times the rigidity of commonly used tattoo paper resulted in comparable levels of tactile acuity. The material offers the benefit of increased robustness against wear and the option to re-use the device. Based on our findings, we derive design recommendations for epidermal devices that combine tactile perception with device robustness.
Interpersonal touch, one of the most primitive social languages, is an excellent design element frequently used in interaction design. In this paper, we develop a richer understanding of it through spatial factors and social relations among people, which have rarely been explored in interactive systems. We designed an interactive installation called "TouchBranch," where players can move light between branches placed at various distances by connecting their bodies. User studies were conducted with 21 groups consisting of intimates, acquaintances, and strangers. We observed changes in interpersonal touch patterns and touch tolerance according to each factor. Interestingly, the effect of the social relation was dramatic, whereas that of the spatial factor was not quantitatively significant. Nevertheless, we discovered that the spatial factor can influence the interpersonal touch experience. Based on the results, we discuss the influence of the two factors on interpersonal touch as it stands out in the context of interactive systems.
A tactile display on the back of a smartwatch is an attractive output option; however, its channel capacity is limited owing to the small contact area. In order to expand the channel capacity, we considered using two perceptually distinct types of stimuli, wind and vibration, together on the same skin area. The result is a multimodal tactile display that combines wind and vibration to create "colored" tactile sensations on the wrist. As a first step toward this goal, we conducted four user experiments with a wind-vibration tactile display to examine different ways of combining wind and vibration: Individual, Sequential, and Simultaneous. The results revealed the sequential combination of wind and vibration to exhibit the highest potential, with an information transfer capacity of 3.29 bits. In particular, the transition of tactile modality was perceived at an accuracy of 98.52%. The current results confirm the feasibility and potential of a multimodal tactile display combining wind and vibration.
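An information-transfer figure such as the 3.29 bits above is typically estimated from a stimulus-response confusion matrix using the mutual-information formula. A minimal sketch of that standard computation (the matrices in the checks are illustrative, not data from the study):

```python
import math

def information_transfer(confusion):
    """Estimated information transfer (bits) from a stimulus-response
    confusion matrix with counts n_ij:
    IT = sum_ij (n_ij/n) * log2(n_ij * n / (n_i. * n_.j))."""
    n = sum(sum(row) for row in confusion)
    row_sums = [sum(row) for row in confusion]          # per-stimulus totals
    col_sums = [sum(col) for col in zip(*confusion)]    # per-response totals
    it = 0.0
    for i, row in enumerate(confusion):
        for j, nij in enumerate(row):
            if nij:  # skip empty cells (0 * log 0 = 0)
                it += (nij / n) * math.log2((nij * n) / (row_sums[i] * col_sums[j]))
    return it
```

Perfect recognition of 2^k equally likely stimuli yields k bits, and any confusion between stimuli lowers the estimate below that ceiling.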
Researchers are proposing many approaches to overcome the usability problem of a smartwatch owing to its small touchscreen. One of the promising approaches is to use touch-sensing edges to expand the control space of a smartwatch. We considered possible interaction techniques using touch-sensing edges in combination with the smartwatch touchscreen: single-edge, multi-edge, and edge × screen (edge and touchscreen in combination). We call these techniques square watch interaction (SWI) techniques in this paper because they exploit the form factor of a square smartwatch. To explore the design space and evaluate the usability of the SWI techniques, we implemented a square smartwatch prototype with touch-sensitive edges, and conducted a series of user experiments. The experiment results showed that the SWI techniques enable precise 1D pointing and occlusion-free 2D pointing. The experiments also produced empirical data that reflect human manual skills for the edge × screen techniques. The produced empirical data will provide a practical guideline for the application of the edge × screen techniques.
Traditional wearable tactile displays transfer tactile stimulations through a firm contact between the stimulator and the skin. We conjecture that a firm contact may not always be possible or acceptable. Therefore, we explored the concept of a non-contact wearable tactile display using an airflow, which can transfer information without a firm contact. To secure an empirical ground for the design of a wearable airflow display, we conducted a series of psychophysical experiments to estimate the intensity thresholds, duration thresholds, and distance thresholds of airflow perception on various body locations, and report the resulting empirical data in this paper. We then built a 4-point airflow display, compared its performance with that of a vibrotactile display, and showed that the two tactile displays are comparable in information transfer performance. User feedback was also positive and revealed many unique expressions describing airflow-based tactile experiences. Lastly, we demonstrate the feasibility of an airflow-based wearable tactile display with a prototype using micro-fans.
A watch-back tactile display (WBTD) is expected to be a viable supplement to the user interface limitations of a smartwatch. However, its design requires that many design parameters such as tactor types and stimulus patterns be determined. We conducted a series of experiments to explore the design space of a WBTD consisting of 3×3 tactors. We demonstrated that tactor types and the temporal patterns and locus of a stimulus produce statistically significant effects on the efficiency of a WBTD. The experimental results can act as a practical guideline for the design of an efficient WBTD.