Call for Papers
2026 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2026)
https://www.ieeesmc2026.org/
IEEE SMC Workshop:
NeuroWearX for Empowered Co-Intelligence:
Advancing Human-Machine Interfaces by Integrating Biological,
Physical, and Spatial Intelligence with Generative AI
Overview
Wearables, assistive robots, and smart IoT systems are rapidly
becoming part of everyday life. However, many of today's
wearable-computing systems and human-machine interfaces still feel
"smart, yet not quite helpful." They are often fragmented, difficult to
personalize, and frequently struggle to transform rich but noisy
multimodal signals, such as physiology (e.g., heart rate,
neural/muscle activity), body movement, and environmental context,
into reliable, meaningful real-world support. At the same time,
generative AI and foundation/world models are reshaping the landscape,
shifting the paradigm beyond isolated sensors and single-purpose
algorithms toward integrated, adaptive, context-aware
Human-AI-Machine systems.
To make the use of these technologies feel like a natural extension of
ourselves, this workshop brings together researchers and practitioners
across biosensing and measurement, neuroscience, AI, robotics,
ubiquitous computing, and human-centered design to define the next
frontier of Empowered Co-Intelligence: wearable interfaces that fuse
three complementary "intelligences": (1) biological
intelligence, to infer human state and intent by modeling how people
naturally think and behave; (2) physical/embodied intelligence, which
understands body dynamics and real-world physics so interaction and
assistance remain safe and effective; and (3) spatial intelligence, to
leverage nearby sensors, IoT devices and smart environments for
continuous situational awareness. These capabilities are further
amplified by generative AI - models trained on large-scale data that
can integrate heterogeneous signals to predict, reason, and
adapt - enabling systems to become more personalized and
context-aware. Together, these capabilities can overcome the limits of
noisy on-body sensing and constrained wearable computing by turning
fragmented measurements into coherent, actionable assistance.
Our vision is an AI that operates quietly in the background, like an
invisible artificial cortex running in parallel with your own cerebral
cortex and coordinating across different lobes: not simply following
rules, but learning your patterns, anticipating your needs, and
adjusting in real time. The result is 24/7 support for thinking,
decision-making, and physical action that feels intuitive, seamless,
and low in cognitive effort.
Beyond new algorithms, we also emphasize human-centered evaluation,
trust, accessibility, and inclusive augmentation, with the goal of
accelerating research that advances wearable computing into reliable,
scalable, and equitable systems - technologies that truly co-evolve
with users over time.
Topics of Interest
We welcome research papers, short/work-in-progress (WiP) papers, and
position/vision papers on (but not limited to) the following areas,
aligned with multimodal perception-decision-action cycles, shared
autonomy, personalization, safety, and real-world robustness in
wearable technologies.
(A) Biological intelligence: sensing human state & intent
• Multimodal biosensing: EEG/EMG/ECG/EDA/PPG/respiration, inertial +
physiological fusion
• Robust biosignal decoding in-the-wild: drift handling, motion
artifacts, missing data, calibration-free methods
• Intent recognition and user-state estimation (fatigue, stress,
attention, readiness, motor intent)
• Personalized adaptation across users: domain adaptation, continual
learning, few-shot personalization
• Privacy-preserving on-body learning and secure biosignal pipelines
(B) Physical intelligence: embodied assistance & safe action
• Embodied AI for wearable augmentation: biomechanics, dynamics,
control, and safe shared autonomy
• Wearable robotics and human–robot physical interaction
(exoskeletons, prostheses, assistive devices)
• Safety, stability, and fail-safe design in closed-loop wearable control
• Human factors and ergonomics for physical assistance; workload-aware support
• Verification/validation of embodied policies for assistive wearable systems
(C) Spatial intelligence: context from environments, robots & smart IoT
• Context-aware wearable computing using smart environments, robots,
and IoT integration
• Scene understanding for assistance: activity context, objects,
layout, hazards, and social context
• Multi-device sensing orchestration (wearable + phone + AR + ambient sensors)
• Real-world robustness and interoperability across heterogeneous devices
(D) Generative AI & foundation/world models for wearables
• Generative AI / foundation models for wearable time-series,
biosignal representation learning, multimodal fusion
• World models for prediction, planning, and co-adaptation in
human-centered wearable interaction
• Edge/on-body inference: efficiency, compression, distillation, and
low-power deployment
• Uncertainty-aware assistance, reliable decision-making, and
"AI-in-the-background" support
(E) Human-centered NeuroDesign, evaluation & impact
• Human-centered interface design: intuitive/subconscious interaction,
trust, transparency, explainability
• UX evaluation in real-world settings: accessibility, equity,
inclusive augmentation
• Ethical, privacy, and governance considerations for always-on
wearable intelligence
• Application domains: assistive augmentation, rehabilitation, health
monitoring, everyday support
Submission Instructions
• Submission deadline: March 22, 2026
• Submission site: 2026 IEEE International Conference on Systems,
Man, and Cybernetics (Papercept):
https://conf.papercept.net/conferences/scripts/start.pl
• Submission code: dt3i1
(Use the submission code "dt3i1" during the Papercept submission
process to route your paper to this workshop.)
• Please follow the IEEE SMC 2026 submission guidelines and formatting
requirements.
IEEE SMC 2026 submission guidelines:
https://www.ieeesmc2026.org/call-for-papers