Quasi AI – On The Robot

Beautiful on the inside

ROS2 Robot System

At Quasi Robotics, we have chosen the Robot Operating System 2 (ROS 2) as the set of software libraries and tools that helps us build robot applications. From drivers to state-of-the-art algorithms, along with powerful developer tools, ROS 2 has everything we need for the R2 robotic platform.

Robot Operating System logo

Using ROS 2 as a robotics middleware suite enables Quasi AI to remain hardware-abstracted while still getting low-level device access, with the reactivity and low latency required for robot control.

Quasi AI is a highly parallel application, running as a set of processes in a graph architecture, where processing takes place in core nodes that receive, publish, and multiplex sensor, control, state, planning, motor, actuator, and other messages.
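In ROS 2 terms, each node in such a graph subscribes to the topics it consumes and publishes the topics it produces. A minimal pure-Python sketch of that publish/subscribe pattern (illustrative only; the topic names and the half-speed rule are invented, and Quasi AI's actual nodes use the ROS 2 client libraries):

```python
from collections import defaultdict

class Graph:
    """Minimal publish/subscribe message graph."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

graph = Graph()
log = []

# A "planner" node consumes depth readings and posts motor commands;
# a "logger" node records every command it sees.
graph.subscribe("sensor/depth",
                lambda msg: graph.publish("motor/cmd", {"speed": msg["range"] * 0.5}))
graph.subscribe("motor/cmd", log.append)

graph.publish("sensor/depth", {"range": 2.0})  # one sensor message flows through the graph
```

Because nodes only share topic names, any node can be replaced or multiplexed without touching the others, which is what makes the architecture hardware-abstract.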

Learn more about Quasi AI Level 1

Manipulator Path Planning and Trajectory Planning

Quasi AI implements various AI algorithms for motion planning, inverse kinematics, manipulation, 3D perception, and collision checking to calculate optimal manipulator (robotic arm and gripper) poses and movements in 3D space relative to the target object's location.

Once calculated, time-parameterized joint trajectories are executed by our proprietary motion controllers. Quasi AI sends each R2 motor speed and position commands and monitors motor feedback, cross-checking calculated against actual positions and compensating for variations accordingly.
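The cross-check-and-compensate loop can be sketched as a simple proportional correction: nudge each commanded position by a fraction of the measured error. This is a toy illustration under assumed numbers (the 80% motor lag and 0.5 gain are invented), not Quasi's proprietary controller:

```python
def track_trajectory(waypoints, read_position, send_command, gain=0.5):
    """Follow time-parameterized waypoints, nudging each command by a
    fraction of the measured position error (proportional compensation)."""
    for target in waypoints:
        error = target - read_position()   # calculated vs. actual position
        send_command(target + gain * error)

class SimMotor:
    """Toy motor that only achieves 80% of each commanded move (simulated lag)."""
    def __init__(self):
        self.position = 0.0
    def read(self):
        return self.position
    def command(self, target):
        self.position += 0.8 * (target - self.position)

compensated, uncompensated = SimMotor(), SimMotor()
track_trajectory([1.0, 2.0, 3.0], compensated.read, compensated.command, gain=0.5)
track_trajectory([1.0, 2.0, 3.0], uncompensated.read, uncompensated.command, gain=0.0)
```

With the feedback term enabled, the motor ends the trajectory closer to the final waypoint than it does with pure open-loop commands.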

Quasi AI also monitors motor temperatures, voltages, and currents to detect motor stalling caused by collisions or other malfunctions, and terminates trajectory execution when an allowed threshold is exceeded.
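A stall guard of this kind reduces to checking each telemetry reading against its limit at every trajectory step and aborting on the first breach. A minimal sketch, with hypothetical threshold values and a simulated current spike standing in for a real collision:

```python
LIMITS = {"temperature_c": 80.0, "current_a": 6.0}  # hypothetical thresholds

def exceeded_limits(telemetry, limits=LIMITS):
    """Return the names of all telemetry readings above their allowed threshold."""
    return [name for name, limit in limits.items() if telemetry.get(name, 0.0) > limit]

def execute_trajectory(trajectory, read_telemetry):
    """Step through a trajectory, aborting as soon as any limit is exceeded."""
    for i, point in enumerate(trajectory):
        faults = exceeded_limits(read_telemetry(i))
        if faults:
            return ("aborted", i, faults)
        # ...command the motors toward `point` here...
    return ("completed", len(trajectory), [])

# Simulated telemetry: current spikes at step 2, as if the arm hit an obstacle.
readings = [
    {"temperature_c": 45.0, "current_a": 2.0},
    {"temperature_c": 46.0, "current_a": 2.5},
    {"temperature_c": 47.0, "current_a": 9.0},
]
result = execute_trajectory([0.0, 0.5, 1.0], lambda i: readings[i])
```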

Model R2 Robot gripper view

Intelligent Automation Provides
Embedded Computer Stereo Vision

Quasi AI relies on time-of-flight sensors and stereo depth cameras to avoid collisions, perform object detection, and aid navigation and localization in the environment.

At Quasi, we’ve developed our own perception framework for real-time object recognition tasks. We’ve implemented fully convolutional single-stage detectors, achieving an excellent speed/accuracy trade-off and state-of-the-art performance on instance segmentation and rotated object detection tasks.

Quasi AI deep learning models are optimized for the capabilities of specific CPU boards and deployed onto specialized hardware through an inference engine.

Quasi AI generated 3D Map of the environment

AMR Robotic
Object Recognition and Object Detection

Quasi AI's sophisticated object detection is built around segmentation of stereo depth camera point clouds to detect individual objects, matching the desired objects against the camera's video stream, and projecting recognized objects from 2D images into 3D space.
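The 2D-to-3D step rests on the standard pinhole camera model: given a detected pixel and its measured depth, back-project into camera-frame coordinates using the camera intrinsics. A minimal sketch (the focal lengths and principal point below are placeholder values, not the R2 camera's real calibration):

```python
def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project an image pixel (u, v) with a measured depth into 3D
    camera-frame coordinates using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A detection 300 pixels right of the principal point, 1.5 m away.
point = pixel_to_camera(u=620, v=240, depth=1.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Applying this per detected pixel (or per point-cloud cluster centroid) is what turns a 2D recognition result into a 3D target the manipulator can reach for.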

Robotic arm of Quasi Model R2 manipulator robot identifying objects within its environment with stereo vision and Q.AI Intelligence

Once the desired object is detected, the Quasi AI grasp detection algorithm takes over, finding the best gripper position to acquire the item. From there, Quasi AI uses inverse kinematics to determine the best possible arm pose and approach to reach the item and retrieve it while avoiding collisions.
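Inverse kinematics answers the question "which joint angles put the gripper at this point?". The R2 arm's real kinematics are not described here, but the idea can be shown with the textbook closed-form solution for a planar two-link arm (link lengths and target are illustrative):

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm (one elbow
    branch): joint angles placing the end effector at (x, y), or None if
    the point is out of reach."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target outside the arm's workspace
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    return (l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2),
            l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2))

angles = two_link_ik(1.0, 1.0, l1=1.0, l2=1.0)
```

A real 6-plus-degree-of-freedom arm has many candidate solutions; collision checking is what selects the "best possible" pose among them.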

We trained the Quasi AI detection pipeline using state-of-the-art convolutional neural network (CNN) based deep learning grasp detection algorithms for intelligent visual grasping across many R2 robot usage scenarios. Furthermore, these detection algorithms allow the end user of R2 robots to quickly and easily introduce new, previously unseen objects into the pipeline.

Intelligent Reporting

Another side of Quasi AI's data processing capabilities is intelligent reporting. In addition to the ability to process enormous streams of data from various inputs, we've added a generalization and learning block.

Quasi AI monitors patterns in data streams and learns to extract information relevant for reporting, auditing, and dashboard visualizations.
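One building block for summarizing a stream without storing it is an online aggregate such as Welford's running mean and variance. This is a generic illustration of stream summarization for dashboards, not a description of Quasi AI's learning block:

```python
class StreamStats:
    """Running mean and variance via Welford's online algorithm: a
    single-pass summary of a data stream, the kind of aggregate a
    reporting dashboard can surface without retaining raw data."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, value):
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self._m2 += delta * (value - self.mean)

    @property
    def variance(self):
        return self._m2 / self.count if self.count else 0.0

stats = StreamStats()
for reading in [2, 4, 4, 4, 5, 5, 7, 9]:
    stats.update(reading)
```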

Quasi AI Training charts

More about Q.AI in the Cloud

Discover More