Smart Cane for Visually Impaired

An innovative assistive technology prototype developed for LSU's Assistive Robotics Course. This smart cane combines LiDAR obstacle detection with AI-powered car recognition to provide comprehensive navigation assistance for visually impaired users through haptic feedback.

Technologies: LiDAR, YOLO Neural Networks, Raspberry Pi, Python, ROS 2, OAK-D Lite, Haptic Feedback

Project Overview

This project was developed as part of LSU's Assistive Robotics Course, where our team created a prototype smart cane designed to enhance mobility and safety for visually impaired users. The cane integrates multiple sensor technologies and AI algorithms to provide real-time environmental awareness through intuitive haptic feedback.

The system addresses two critical navigation challenges: obstacle detection for immediate hazards and vehicle detection for traffic safety. By combining LiDAR technology with computer vision, the smart cane offers a comprehensive solution that goes beyond traditional white canes.

Technical Implementation

Sensor Architecture

  • RPLIDAR Sensor: 12m theoretical range, 6m practical range, optimized to 1.5m for user clarity
  • OAK-D Lite Depth Camera: Mounted near the grip for computer vision and depth perception
  • Rumble Motor: Integrated for immediate obstacle alerts

AI & Processing

  • YOLO Neural Network: Real-time car detection and classification
  • Raspberry Pi: Central processing unit for sensor fusion and decision making
  • ROS 2: Robot Operating System for modular sensor integration and communication
  • Direction Analysis: Determines vehicle approach direction for targeted feedback

Haptic Feedback System

  • Moveable Sleeve: Translates directional information into physical taps
  • Vibration Alerts: Immediate notification for obstacles within detection range
  • Directional Tapping: Indicates car direction through sleeve movement

Power & Mobility

  • Power Bank: Portable energy source for extended use
  • Wheel Base: Smooth mobility enhancement at cane tip
  • Modular Design: Easy maintenance and component replacement

Key Features

Obstacle Detection

The LiDAR sensor provides 360° obstacle detection within a 1.5m optimized range (reduced from the 6m practical range for user clarity), alerting users to immediate hazards through vibration feedback.
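
A minimal sketch of how this cutoff could be applied in a ROS 2 node is shown below; the `/scan` topic name, the GPIO pin, and the way the rumble motor is driven are illustrative assumptions rather than the exact project code.

```python
# Hypothetical ROS 2 node: ignore LiDAR returns beyond 1.5 m and drive the
# rumble motor while anything closer is detected. Pin number and topic name
# are illustrative assumptions.
import math
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
import RPi.GPIO as GPIO

OBSTACLE_RANGE_M = 1.5   # optimized cutoff (practical range is ~6 m)
MOTOR_PIN = 18           # assumed BCM pin wired to the rumble motor

class ObstacleAlertNode(Node):
    def __init__(self):
        super().__init__('obstacle_alert')
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(MOTOR_PIN, GPIO.OUT)
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, scan: LaserScan):
        # Keep only finite returns inside the 1.5 m window.
        close = [r for r in scan.ranges
                 if math.isfinite(r) and scan.range_min < r < OBSTACLE_RANGE_M]
        # Vibrate while at least one obstacle sits inside the window.
        GPIO.output(MOTOR_PIN, GPIO.HIGH if close else GPIO.LOW)

def main():
    rclpy.init()
    rclpy.spin(ObstacleAlertNode())

if __name__ == '__main__':
    main()
```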

Vehicle Recognition

YOLO neural network detects approaching vehicles and determines their direction, providing directional haptic feedback through the moveable sleeve.
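
A sketch of the direction-analysis step, assuming the Ultralytics YOLO Python package and frames pulled from the OAK-D Lite; the model weights, the COCO class id, and the screen-thirds split are illustrative assumptions, not the project's exact pipeline.

```python
# Hypothetical direction analysis: run YOLO on a camera frame and map the
# bounding-box centre of each detected car to left / ahead / right.
from ultralytics import YOLO  # assumed detector; the project may use another YOLO variant

CAR_CLASS_ID = 2  # 'car' in the COCO label set
model = YOLO('yolov8n.pt')

def car_directions(frame):
    """Return a list of 'left' / 'ahead' / 'right' strings, one per detected car."""
    height, width = frame.shape[:2]
    results = model(frame, verbose=False)[0]
    directions = []
    for box in results.boxes:
        if int(box.cls) != CAR_CLASS_ID:
            continue
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        centre_x = (x1 + x2) / 2.0
        if centre_x < width / 3:
            directions.append('left')
        elif centre_x > 2 * width / 3:
            directions.append('right')
        else:
            directions.append('ahead')
    return directions
```

Each returned direction string could then be forwarded to the sleeve controller to trigger the matching tap.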

Haptic Feedback

Dual feedback system: vibration for obstacles and directional tapping for vehicle approach, providing intuitive navigation assistance.
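
As an illustration of the tapping side of this system, a small servo could swing the moveable sleeve toward the reported direction; the pin, pulse widths, and hold time below are assumed values, not measurements from the prototype.

```python
# Hypothetical sleeve controller: a small servo swings the moveable sleeve
# toward the side the car is approaching from. Pin and pulse widths are
# illustrative assumptions.
import time
import RPi.GPIO as GPIO

SERVO_PIN = 12                   # assumed BCM pin driving the sleeve servo
GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
servo = GPIO.PWM(SERVO_PIN, 50)  # standard 50 Hz hobby-servo signal
servo.start(0)

TAP_DUTY = {'left': 5.0, 'ahead': 7.5, 'right': 10.0}  # assumed positions

def tap(direction):
    """Swing the sleeve toward `direction`, hold briefly, then release."""
    servo.ChangeDutyCycle(TAP_DUTY[direction])
    time.sleep(0.3)
    servo.ChangeDutyCycle(0)   # stop driving so the sleeve returns freely
```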

Sensor Fusion

Combines LiDAR and computer vision data for comprehensive environmental awareness and intelligent decision-making.
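
A sketch of how this fusion layer could arbitrate between the two sensor streams as a ROS 2 node; the topic names and message types are illustrative assumptions rather than the project's actual interfaces.

```python
# Hypothetical fusion node: obstacle alerts take priority over car-direction
# taps, since a close obstacle is the more immediate hazard. Topic names and
# message types are assumptions for illustration.
import rclpy
from rclpy.node import Node
from std_msgs.msg import Bool, String

class HapticArbiter(Node):
    def __init__(self):
        super().__init__('haptic_arbiter')
        self.obstacle_near = False
        self.create_subscription(Bool, '/obstacle_near', self.on_obstacle, 10)
        self.create_subscription(String, '/car_direction', self.on_car, 10)
        self.haptic_pub = self.create_publisher(String, '/haptic_command', 10)

    def on_obstacle(self, msg: Bool):
        self.obstacle_near = msg.data
        if msg.data:
            self.haptic_pub.publish(String(data='vibrate'))

    def on_car(self, msg: String):
        # Suppress directional taps while an obstacle alert is active.
        if not self.obstacle_near:
            self.haptic_pub.publish(String(data=f'tap_{msg.data}'))

def main():
    rclpy.init()
    rclpy.spin(HapticArbiter())
```

Prioritizing the obstacle channel keeps the most safety-critical feedback unambiguous even when a vehicle is detected at the same time.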

Technical Challenges & Solutions

LiDAR Range Optimization

Challenge: The LiDAR sensor had a theoretical range of 12m and practical range of 6m, but longer detection distances would confuse users about obstacle location.

Solution: Optimized the detection range to 1.5m to provide clear, actionable feedback about immediate obstacles without overwhelming users with distant objects.

Limited Component Selection

Challenge: Working with a very limited selection of available parts in the lab, including a wheel with bumps that created unwanted vibrations during movement.

Solution: Because the bumpy wheel's vibrations masked the rumble motor feedback, careful signal processing and vibration pattern differentiation were needed to keep obstacle alerts clearly distinguishable from the wheel's background vibration.

Vehicle Motion Detection

Challenge: The YOLO neural network could detect cars but couldn't distinguish between moving and stationary vehicles, leaving users uncertain about traffic flow.

Solution: This remained a known limitation of the prototype; future iterations would add motion-tracking algorithms or temporal analysis across frames to distinguish moving from parked vehicles and provide more complete traffic awareness.

Real-time Processing with ROS 2

Challenge: Processing LiDAR and camera data simultaneously while maintaining low latency for safety-critical feedback using ROS 2 architecture.

Solution: Leveraged ROS 2's modular node architecture to keep LiDAR and camera processing decoupled, and optimized the Python algorithms to achieve sub-second response times without sacrificing system reliability.
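
One common ROS 2 technique that fits this goal is subscribing with the built-in sensor-data QoS profile, so stale scans are dropped rather than queued behind the safety-critical feedback path; the snippet below is a sketch with an assumed topic name, not the project's exact configuration.

```python
# Hypothetical low-latency subscription: the sensor-data QoS profile uses
# best-effort delivery and a shallow queue, so old LiDAR scans are dropped
# instead of piling up while the node is busy.
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import LaserScan

class LowLatencyScanNode(Node):
    def __init__(self):
        super().__init__('low_latency_scan')
        self.create_subscription(LaserScan, '/scan', self.on_scan,
                                 qos_profile_sensor_data)

    def on_scan(self, scan: LaserScan):
        pass  # hand off to the obstacle-detection logic
```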

Haptic Interface Design

Challenge: Creating intuitive haptic feedback that users can quickly interpret while walking, especially with competing vibration sources.

Solution: Developed distinct vibration patterns for obstacles and a directional tapping system for vehicle approach, with careful frequency and intensity differentiation so alerts stand out against the background vibration from the wheel.
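
A sketch of how the two alert patterns could be encoded so they stand apart from the wheel's vibration; the duty cycles and timings are assumed values chosen only for illustration.

```python
# Hypothetical pattern table: obstacle alerts use a single strong buzz, vehicle
# alerts a slower pulsed rhythm, so neither blends into the low-level vibration
# from the bumpy wheel. All values are illustrative assumptions.
import time
import RPi.GPIO as GPIO

MOTOR_PIN = 18                      # assumed BCM pin
GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)
motor = GPIO.PWM(MOTOR_PIN, 100)    # 100 Hz vibration carrier
motor.start(0)

PATTERNS = {
    'obstacle': [(90, 0.6)],                   # one strong 0.6 s buzz
    'vehicle':  [(60, 0.2), (0, 0.2)] * 3,     # three short pulses
}

def play(pattern_name):
    """Step through (duty_cycle %, seconds) pairs for the named alert."""
    for duty, seconds in PATTERNS[pattern_name]:
        motor.ChangeDutyCycle(duty)
        time.sleep(seconds)
    motor.ChangeDutyCycle(0)
```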

Project Results & Presentation

The smart cane prototype was successfully developed and presented to the class as part of LSU's Assistive Robotics Course. The project demonstrated effective integration of multiple sensor technologies and received positive feedback for its innovative approach to assistive technology design.

Key achievements included successful obstacle detection within the optimized 1.5m range, functional car detection using YOLO neural networks, and intuitive haptic feedback through the moveable sleeve mechanism. The project showcased practical applications of machine learning in accessibility technology, combining computer vision, LiDAR sensing, and haptic feedback to create a more comprehensive navigation aid.

The dual-sensor approach addressed both immediate obstacle detection and broader environmental awareness, providing users with enhanced confidence and safety during navigation. This work contributes to the growing field of assistive robotics and demonstrates the importance of user-centered design in accessibility technology.