An innovative assistive technology prototype developed for LSU's Assistive Robotics Course. This smart cane combines LiDAR obstacle detection with AI-powered car recognition to provide comprehensive navigation assistance for visually impaired users through haptic feedback.
Our team developed this prototype smart cane in LSU's Assistive Robotics Course to enhance mobility and safety for visually impaired users. The cane integrates multiple sensor technologies and AI algorithms to provide real-time environmental awareness through intuitive haptic feedback.
The system addresses two critical navigation challenges: obstacle detection for immediate hazards and vehicle detection for traffic safety. By combining LiDAR technology with computer vision, the smart cane offers a comprehensive solution that goes beyond traditional white canes.
LiDAR sensor provides 360° obstacle detection within an optimized 1.5m range (tightened from the ~6m practical range so feedback stays unambiguous), alerting users to immediate hazards through vibration feedback.
YOLO neural network detects approaching vehicles and determines their direction, providing directional haptic feedback through the moveable sleeve (a minimal detection sketch follows this list).
Dual feedback system: vibration for obstacles and directional tapping for vehicle approach, providing intuitive navigation assistance.
Combines LiDAR and computer vision data for comprehensive environmental awareness and intelligent decision-making.
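To make the vehicle-detection feature concrete, here is a minimal sketch of car detection with a pretrained YOLO model. The `ultralytics` package, the `yolov8n.pt` weights, and the camera index are illustrative assumptions; the prototype's exact model and wiring may differ.

```python
# Hypothetical sketch: detect cars in a camera frame with a pretrained
# YOLO model. Class index 2 is "car" in the COCO label set used by the
# pretrained weights.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # assumed weights; swap in the project's model
CAR_CLASS_ID = 2            # COCO class index for "car"

def detect_cars(frame):
    """Return (x1, y1, x2, y2) boxes for every car found in the frame."""
    results = model(frame, verbose=False)
    boxes = []
    for box in results[0].boxes:
        if int(box.cls[0]) == CAR_CLASS_ID:
            boxes.append(tuple(box.xyxy[0].tolist()))
    return boxes

cap = cv2.VideoCapture(0)  # assumed camera index
ok, frame = cap.read()
if ok:
    print(detect_cars(frame))
cap.release()
```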
Challenge: The LiDAR sensor had a theoretical range of 12m and a practical range of 6m, but reporting obstacles that far out would confuse users about which hazards were actually close.
Solution: Optimized the detection range to 1.5m to provide clear, actionable feedback about immediate obstacles without overwhelming users with distant objects.
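As a rough illustration of the range-gating idea under the project's ROS 2 stack, the sketch below ignores any LiDAR return beyond 1.5m. The topic names (`scan`, `haptic/vibrate`) and the `Bool` haptic interface are assumptions, not the prototype's actual API.

```python
# Minimal rclpy sketch of the 1.5m gating idea: only returns inside
# MAX_RANGE count as hazards and trigger the vibration motor.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from std_msgs.msg import Bool

MAX_RANGE = 1.5  # metres; tightened from the sensor's ~6m practical range

class ObstacleGate(Node):
    def __init__(self):
        super().__init__("obstacle_gate")
        self.create_subscription(LaserScan, "scan", self.on_scan, 10)
        self.haptic = self.create_publisher(Bool, "haptic/vibrate", 10)

    def on_scan(self, msg: LaserScan):
        # A return is a hazard only inside the tightened range window.
        hazard = any(
            msg.range_min <= r <= MAX_RANGE
            for r in msg.ranges
            if math.isfinite(r)
        )
        self.haptic.publish(Bool(data=hazard))

def main():
    rclpy.init()
    rclpy.spin(ObstacleGate())

if __name__ == "__main__":
    main()
```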
Challenge: Working with a very limited selection of available parts in the lab, including a wheel with bumps that created unwanted vibrations during movement.
Solution: Differentiated the haptic alerts from the wheel's background rumble through careful vibration pattern design, using distinct frequencies and intensities so obstacle warnings remained unmistakable during movement.
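One plausible way to realize that differentiation, assuming a simple PWM-driven rumble motor (the `set_motor` call is a placeholder, not the project's driver): sharp full-strength bursts separated by hard gaps read as an alert even over continuous wheel rumble.

```python
# Sketch of the pattern idea: a pulsed on/off train stands apart from the
# steady low-level vibration the bumpy wheel produces while rolling.
import time

def set_motor(duty: float):
    """Placeholder: drive the rumble motor at duty in [0.0, 1.0]."""
    pass  # e.g. a PWM write on the actual hardware

def obstacle_pulse(pulses=3, on_s=0.12, off_s=0.10, duty=1.0):
    """Short, full-strength bursts with hard gaps, unlike wheel rumble."""
    for _ in range(pulses):
        set_motor(duty)
        time.sleep(on_s)
        set_motor(0.0)
        time.sleep(off_s)
```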
Challenge: The YOLO neural network could detect cars but couldn't distinguish between moving and stationary vehicles, leaving users uncertain about traffic flow.
Solution: Documented the limitation and identified motion tracking or temporal analysis, such as comparing detections across frames, as the path to fuller traffic awareness in future iterations.
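A future iteration could approximate this with frame-to-frame centroid tracking of the YOLO bounding boxes; the sketch below is illustrative only, and the pixel threshold is a made-up tuning value.

```python
# Future-work sketch (not implemented in the prototype): flag a detected
# car as moving when its bounding-box centroid shifts between frames.
MOTION_THRESHOLD_PX = 8.0  # assumed centroid shift that counts as "moving"

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def is_moving(prev_box, curr_box) -> bool:
    (px, py), (cx, cy) = centroid(prev_box), centroid(curr_box)
    return ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5 > MOTION_THRESHOLD_PX
```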
Challenge: Processing LiDAR and camera data simultaneously while maintaining low latency for safety-critical feedback using ROS 2 architecture.
Solution: Leveraged ROS 2's modular node architecture and optimized the Python processing pipeline to keep feedback sub-second while preserving system reliability.
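The arbitration side of that design can be sketched as a small fusion node that merges the two alert streams and lets the safety-critical obstacle vibration take priority. The topic names and the 20Hz timer are assumptions for illustration.

```python
# Sketch of the modular decomposition under ROS 2: one node per sensor,
# plus this arbiter, which picks a single haptic command per tick.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool, String

class HapticArbiter(Node):
    def __init__(self):
        super().__init__("haptic_arbiter")
        self.obstacle = False
        self.vehicle_dir = ""
        self.create_subscription(Bool, "haptic/vibrate", self.on_obstacle, 10)
        self.create_subscription(String, "vehicle/direction", self.on_vehicle, 10)
        self.cmd = self.create_publisher(String, "haptic/command", 10)
        self.create_timer(0.05, self.tick)  # 20 Hz keeps feedback sub-second

    def on_obstacle(self, msg: Bool):
        self.obstacle = msg.data

    def on_vehicle(self, msg: String):
        self.vehicle_dir = msg.data

    def tick(self):
        # Obstacle vibration takes priority over directional tapping.
        if self.obstacle:
            self.cmd.publish(String(data="vibrate"))
        elif self.vehicle_dir:
            self.cmd.publish(String(data=f"tap:{self.vehicle_dir}"))

def main():
    rclpy.init()
    rclpy.spin(HapticArbiter())

if __name__ == "__main__":
    main()
```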
Challenge: Creating intuitive haptic feedback that users can quickly interpret while walking, especially with competing vibration sources.
Solution: Developed distinct vibration patterns for obstacles and a directional tapping system for vehicle approach, differentiating frequency and intensity so alerts cut through background noise.
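A minimal sketch of the directional mapping, assuming the tap side is chosen from the car's horizontal position in the camera frame; the actuator calls and the frame-centre split are placeholders, not the sleeve's actual driver or geometry.

```python
# Sketch: tap the sleeve on the side matching the car's position in frame.
FRAME_WIDTH = 640  # assumed camera resolution

def tap_left():
    pass  # placeholder for the sleeve's left actuator

def tap_right():
    pass  # placeholder for the sleeve's right actuator

def signal_vehicle(box):
    """Choose the tap side from the detection's horizontal centre."""
    x1, _, x2, _ = box
    cx = (x1 + x2) / 2.0
    (tap_left if cx < FRAME_WIDTH / 2 else tap_right)()
```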
The smart cane prototype was successfully developed and presented to the class as part of LSU's Assistive Robotics Course. The project demonstrated effective integration of multiple sensor technologies and received positive feedback for its innovative approach to assistive technology design.
Key achievements included successful obstacle detection within the optimized 1.5m range, functional car detection using YOLO neural networks, and intuitive haptic feedback through the moveable sleeve mechanism. The project showcased practical applications of machine learning in accessibility technology, combining computer vision, LiDAR sensing, and haptic feedback to create a more comprehensive navigation aid.
The dual-sensor approach addressed both immediate obstacle detection and broader environmental awareness, providing users with enhanced confidence and safety during navigation. This work contributes to the growing field of assistive robotics and demonstrates the importance of user-centered design in accessibility technology.