People with visual impairments consistently face mobility restrictions due to the scarcity of truly accessible environments. Even in structured settings, people with low vision may have trouble navigating efficiently and safely because of hallway and threshold ambiguity. Currently available assistive technologies neither detect doors and door handles nor concretely help visually impaired users reach toward a detected object. In this paper, we propose an AI-driven wearable assistive technology that integrates door-handle detection, real-time tracking of the user's hand position relative to the detected handle, and audio feedback in the form of joystick-like commands for acquiring the target and performing the subsequent hand-to-handle manipulation. When fully realized, this platform will help end users locate doors and door handles and reach them under continuous feedback, enabling safe and efficient travel through environments with thresholds. Compared with conventional computer vision models, the model proposed in this paper requires significantly fewer computational resources, allowing it to run with a stereoscopic camera on a small graphics processing unit (GPU) and thereby remain conveniently portable. We also introduce a dataset containing different types of door handles and door knobs with bounding-box annotations, which can be used for training and testing in future research.
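To make the joystick-like guidance concrete, the sketch below shows one plausible way the audio commands could be derived from detector outputs: compare the centers of the hand and handle bounding boxes in the image plane, use the stereoscopic depth estimates for the forward axis, and announce the single largest remaining error. This is a minimal illustration under stated assumptions; the function names, thresholds, and box values are hypothetical and are not specified by the paper.

```python
# Hypothetical sketch of joystick-like guidance from bounding boxes.
# Boxes are (x_min, y_min, x_max, y_max) in image pixels, e.g. from a
# handle detector and a hand detector run on the same camera frame.

def center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def guidance_command(hand_box, handle_box, hand_depth_m, handle_depth_m,
                     pixel_tol=20, depth_tol_m=0.05):
    """Map the hand-to-handle offset to one joystick-like audio command.

    Depth values would come from the stereoscopic camera; the tolerance
    thresholds here are illustrative assumptions, not values from the paper.
    """
    (hx, hy), (tx, ty) = center(hand_box), center(handle_box)
    dx, dy = tx - hx, ty - hy           # image-plane offset to the target
    dz = handle_depth_m - hand_depth_m  # positive: handle is farther away

    # Announce one axis at a time, prioritizing the largest remaining
    # error, so the audio channel is never overloaded with cues.
    if abs(dx) > pixel_tol:
        return "right" if dx > 0 else "left"
    if abs(dy) > pixel_tol:
        return "down" if dy > 0 else "up"
    if dz > depth_tol_m:
        return "forward"
    return "grasp"  # hand is aligned with the handle within tolerance

# Example frame: hand below and to the left of the handle, 30 cm short of it.
print(guidance_command((100, 300, 180, 380), (320, 180, 380, 240), 0.45, 0.75))
# -> "right"
```

Issuing a single directional cue per frame, rather than a full offset vector, keeps the audio feedback simple enough for a user to follow in real time, which is the intuition behind the joystick-like command design.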