Soft nonholonomic constraints: Theory and applications to optimal control

Determining the time-optimal trajectories for different robotic platforms is a fundamental problem in robotics, yet it has not been fully explored. To simplify the problem, researchers have opted to use kinematic models of motion. Even though this approach simplified the characterization of optimal trajectories (in some cases exact analytical solutions were found), it came at the expense of the feasibility of those trajectories. To allow for more feasible trajectories, several researchers have attempted to employ dynamic models, yet these yield optimal control solutions that involve chattering: an infinite number of control switches within a finite amount of time. This problem remains unsolved and is still under investigation. In this research, we tackle the problem of chattering by modifying the motion model, and in particular the nonholonomic constraints describing the wheel-ground contact interactions. We propose to relax the constraints and allow for skidding in the model. The manner in which the constraints are relaxed keeps the optimal control problem amenable to analysis using Pontryagin's Maximum Principle. We refer to these constraints as soft nonholonomic constraints. Results show that incorporating these constraints into the model rules out chattering solutions from the optimal control.
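
To make the idea of relaxing a nonholonomic constraint concrete, the sketch below contrasts a unicycle with the hard constraint (zero lateral body velocity) against one with an added lateral-slip state. This is a minimal illustration only, not the formulation developed in this research: the slip state `s`, its initial value, and the decay rate `k` are all assumptions made for the example.

```python
import math

def simulate_unicycle(T=2.0, dt=1e-3, soft=True, k=20.0):
    """Unicycle with an optional lateral-slip state.

    Hard nonholonomic constraint: lateral body velocity is exactly zero.
    Soft constraint (illustrative): a lateral slip s decays at rate k,
    so brief skidding is admissible but damped out by the dynamics.
    """
    x = y = theta = 0.0
    s = 0.5 if soft else 0.0      # initial lateral slip (hypothetical value)
    v, omega = 1.0, 0.5           # forward speed and turn rate inputs
    for _ in range(int(T / dt)):
        x += (v * math.cos(theta) - s * math.sin(theta)) * dt
        y += (v * math.sin(theta) + s * math.cos(theta)) * dt
        theta += omega * dt
        s += -k * s * dt          # slip decays: the constraint holds "softly"
    return x, y, theta, s
```

Under the hard constraint the slip state is identically zero; under the soft one it decays exponentially, so the two trajectories differ only transiently while the relaxed model remains smooth enough for Pontryagin-style analysis.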

By: Salah Bazzi

Humanoid Fall Avoidance

Humanoid fall avoidance is important for protecting the robot against breakage and saving the time needed to recover from a fall. Humanoids encounter various disturbances that can be harmful if not treated properly. Several humanoids are meant to operate around humans, so bumps and trips are inevitable. If no fall avoidance methods are applied, there is a risk of harm to the individuals in the environment and of damage to the robot, requiring costly maintenance. Most fall avoidance strategies rely on single sensors that measure the angular position or velocity of the robot's Centre of Mass (CoM). However, in everyday operation, these sensors are prone to noise that renders the measurements inaccurate and deceptive. For example, if the robot is standing on uneven terrain, the proprioceptive sensor alone will not give a reliable evaluation of balance. As a result, there is a dire need for improved fall avoidance strategies to ensure the safety of humanoids and of the humans working in the same environments.
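
As a point of reference for CoM-based balance assessment, one widely used indicator is the capture point of the linear inverted pendulum model: if it leaves the support polygon, a step or other recovery action is needed. The sketch below is a one-dimensional (sagittal-axis) illustration of that indicator, not the strategy developed in this research; all numerical values are assumptions.

```python
import math

G = 9.81  # gravitational acceleration [m/s^2]

def capture_point(com_pos, com_vel, com_height):
    """Instantaneous capture point of the linear inverted pendulum model."""
    return com_pos + com_vel * math.sqrt(com_height / G)

def fall_warning(com_pos, com_vel, com_height, foot_min, foot_max):
    """Warn when the capture point leaves the (1D) support interval."""
    cp = capture_point(com_pos, com_vel, com_height)
    return not (foot_min <= cp <= foot_max)
```

Standing still with the CoM over the feet triggers no warning; a strong forward push (large CoM velocity) drives the capture point beyond the toes and does.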

By: Noel Maalouf

Object-Oriented Structure from Motion

One popular approach to 3D reconstruction, which relies on a monocular camera, is what is well known as Structure from Motion (SfM). Starting from 2D images of a scene, SfM recovers both the scene structure and the camera trajectory inside it. In spite of the significant progress achieved in SfM over the past decades, the structures obtained still lack the quality of reconstructions produced by laser scanning, for example, and in many cases require labor-intensive manual post-processing of the point cloud before they can be used in practice. The aim of our research is, using a minimal amount of input from the user, to improve the structure estimation part of SfM by treating points in the scene non-uniformly, with the major focus on pertinent objects in the scene, leading to what we call Object-Oriented SfM.
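
One simple way such non-uniform treatment could enter the SfM pipeline is by reweighting the reprojection residuals so that points on user-marked objects dominate the optimization. The sketch below illustrates only that weighting idea; the weights and the boolean labeling are hypothetical, and the actual Object-Oriented SfM formulation may differ.

```python
def weighted_reprojection_cost(residuals, labels, w_object=10.0, w_background=1.0):
    """Sum of squared 2D reprojection residuals, with points on user-marked
    objects weighted more heavily than background points.

    residuals: list of (du, dv) reprojection errors in pixels
    labels:    list of booleans, True if the point lies on a pertinent object
    """
    cost = 0.0
    for (du, dv), on_object in zip(residuals, labels):
        w = w_object if on_object else w_background
        cost += w * (du * du + dv * dv)
    return cost
```

Minimizing such a cost (e.g. inside bundle adjustment) spends the error budget on the objects of interest rather than uniformly over the scene.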

By: Rahaf Rahal

Towards Fully Autonomous Self-Supervised Free Space Estimation

Fully autonomous free space estimation is considered one of the holy grails of robotics research. Despite an abundance of algorithms tackling this problem, it is still considered unsolved, mainly because proposed algorithms are specific to certain environments. This phenomenon is much more prominent in environments where the properties of free space vary, whether spatially or temporally. Algorithms tailored specifically to a certain environment are expected to perform rather poorly when the properties of that environment change. This project aims to solve this problem by developing a sustainable system able to reliably navigate harsh, dynamic environments over long periods of time.

By: Ali Harakeh

Work continued by: Mahmoud Hamandi

Pedestrian Detection

After the breakthrough in face detection by Viola and Jones in 2001, pedestrian detection logically followed, as most robotic systems have to interact with humans. Although this problem has been tackled thoroughly from 2003 until today, it is still far from resolved. Our first step will be to assess the state of the art. After that, we will implement available algorithms reliably, in a safety-related engineering project, while searching for possible enhancements.
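
Most detectors in this lineage (Viola-Jones cascades, HOG+SVM sliding windows) end with a non-maximum suppression stage that merges overlapping candidate boxes. The sketch below shows that standard component in isolation; the boxes, scores, and threshold are illustrative, not tied to any particular detector we will evaluate.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box in each cluster of overlapping detections."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

Two heavily overlapping detections of the same pedestrian collapse to the higher-scoring one, while a detection elsewhere in the image survives untouched.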

By: Mahmoud Hamandi

Ground Vehicles Driver Assistance and Active Safety Control Systems

Vehicle roll-over and skidding are the main contributors to accidents and crashes on the road, which underlines the importance of active control systems for enhancing the safety of ground vehicles. Automotive active safety control systems assist the driver by preventing or mitigating the loss of maneuverability, quickly and judiciously intervening at the limits of vehicle handling. Three subsystems, namely steering, suspension, and braking (or driveline), can play an active role in the vehicle's stability when electronically controlled. The aim of this research is to synthesize controllers that integrate and coordinate the intervention of the braking, steering, and suspension actuators to improve vehicle stability depending on the driving conditions. The goal is to fully exploit the collaboration potential between the subsystems to create a harmonious vehicle system. Since each controller pursues its own objective, conflicts may arise and degrade the overall performance. A supervisory chassis controller ensures that the three controlled subsystems interact, simultaneously, to reach a common objective.
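
As a small illustration of the kind of decision a supervisory stability controller makes, the sketch below compares the measured yaw rate against a steady-state bicycle-model reference representing the driver's intent, in the style of electronic stability control. The wheelbase, understeer gradient, threshold, and actuator choices are assumed example values, not parameters from this research.

```python
def reference_yaw_rate(speed, steer_angle, wheelbase, understeer_gradient):
    """Steady-state bicycle-model yaw-rate reference [rad/s]."""
    return speed * steer_angle / (wheelbase * (1.0 + understeer_gradient * speed ** 2))

def esc_intervention(measured_yaw_rate, speed, steer_angle,
                     wheelbase=2.7, understeer_gradient=0.0015, threshold=0.1):
    """Flag when the vehicle deviates from the driver-intended yaw response."""
    error = measured_yaw_rate - reference_yaw_rate(speed, steer_angle,
                                                   wheelbase, understeer_gradient)
    if error > threshold:
        return "oversteer: brake outer front"   # illustrative actuator choice
    if error < -threshold:
        return "understeer: brake inner rear"
    return "no intervention"
```

A coordinated chassis controller would extend this single check into an arbitration over braking, steering, and suspension actions pursuing the common stability objective.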

By: Carine Bardawil

Advanced Control Strategies for Unmanned Aerial Vehicles

The design and control of unmanned aerial vehicles (UAVs) is an active research area given the popularity and complexity of these machines. This research focuses on the design of control algorithms to govern the motion and trajectory-following performance of a quadrotor. Quadrotors operate in environments with varying conditions and parametric uncertainties, such as sudden or gradual mass fluctuation when transporting or discharging objects, aerodynamic changes (wind gusts), and variation in the position of the center of mass. The aim of this research is to tackle the problem of parametric uncertainty and unmodeled nonlinearities via the design of adaptive control laws, which are validated in simulation on a high-fidelity nonlinear dynamic model, and experimentally on an available quadrotor platform.
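
To illustrate how an adaptive law can absorb a mass uncertainty of the kind described above, the sketch below holds altitude for a 1-D vertical model with an unknown mass, updating a mass estimate from the acceleration-tracking error. This is a simplified illustration under assumed gains (kp, kd, gamma) and a gradient-style update, not the adaptive controller developed in this research.

```python
def simulate_adaptive_altitude(m_true=1.5, m_hat=1.0, gamma=0.1,
                               kp=4.0, kd=4.0, g=9.81,
                               z_ref=1.0, dt=0.002, t_final=10.0):
    """Altitude hold with an unknown mass, estimated online.

    Dynamics: m * zdd = u - m * g   (thrust u, gravity g).
    Control:  u = m_hat * (g + a_des), a PD law scaled by the mass estimate.
    Adaptation (gradient-style, illustrative): the mismatch between the
    commanded and realized acceleration drives the mass estimate toward
    the true mass, since under-thrust reveals an underestimated mass.
    """
    z, zd = 0.0, 0.0
    for _ in range(int(t_final / dt)):
        a_des = kp * (z_ref - z) - kd * zd
        u = m_hat * (g + a_des)
        zdd = u / m_true - g                   # plant responds with true mass
        m_hat += gamma * (a_des - zdd) * dt    # adapt on acceleration error
        zd += zdd * dt
        z += zd * dt
    return z, m_hat
```

Starting 33% light on the mass estimate, the simulated vehicle briefly sinks, then the estimate converges and the altitude settles at the reference.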

By: Mohammad Jawad Lakis

Teleoperation of UAV with Haptic Feedback

Unmanned aerial vehicles, widely known as UAVs, are a specific type of flying machine that does not require a human pilot onboard and can be remotely operated and controlled. In our research we focus mainly on micro aerial vehicles, and specifically on the quadrotor (quadcopter) family. Because quadrotors are underactuated systems, a solid understanding of their aerodynamics must be acquired, regardless of whether the system is driven autonomously or manually. As a result, precise remote control of quadrotors is a challenging task because of the inherent loss of sensory perception during flight over a fast-varying environment. Our research proposes a new technique for teleoperating quadrotors using haptic devices with force feedback. The control method aims at facilitating the flight process of UAVs by making it more natural and intuitive.
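
A common way to render such force feedback is a repulsive virtual spring: the closer the quadrotor gets to an obstacle, the harder the haptic device pushes back on the operator's hand. The sketch below shows only that rendering rule; the safety distance, stiffness, and force saturation are assumed example values, not the parameters of the proposed technique.

```python
def haptic_feedback_force(distance, d_safe=2.0, stiffness=5.0, f_max=10.0):
    """Repulsive force [N] rendered on the haptic device as the quadrotor
    approaches an obstacle: zero beyond the safety distance d_safe,
    spring-like inside it, and saturated at the device limit f_max.
    """
    if distance >= d_safe:
        return 0.0
    force = stiffness * (d_safe - distance)
    return min(force, f_max)
```

The saturation matters in practice because commercial haptic devices can only exert a bounded force, so the mapping must clip rather than grow without limit near contact.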

By: Ali Kanso

Autonomous Underwater Vehicle for Monitoring of Maritime Pollution

The goal of this research is to develop a conceptual hybrid autonomous underwater vehicle (H-AUV) which combines the features of a propelled underwater vehicle with those of an underwater glider. The platform we intend to build is to be equipped with chemical sensors for in-situ monitoring of maritime pollution, and performs underwater missions controlled by on-board computers with no need for interaction with a human operator. Our first objective is to provide a fully developed mechanical design of a conceptual hybrid autonomous underwater vehicle. A 3-D transparent view of the vehicle is shown in Fig. 1. Next, we develop its dynamic model and perform several simulations to showcase its locomotive capabilities in both propulsion and gliding modes. On the other hand, we address the path planning problem of the autonomous underwater vehicle in three-dimensional space, with constraints on the minimum turning radius in heading and pitch curvatures, and on the maximum pitch angle. Given initial and final configurations, we present a method for generating a feasible trajectory of minimum length linking the two configurations, using the concept of Dubins theory. The 3D paths are computed geometrically using the vehicle's kinematics, making this method easy to implement on underwater vehicles and much more efficient than computational path planning methods. An example of a 3-D path generated by our proposed method is provided in Fig. 2.
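
For background on the Dubins machinery, the sketch below computes the length of one planar Dubins word (Left-Straight-Left) between two configurations with a minimum turning radius. A complete planner evaluates all six words and keeps the shortest, and the 3-D extension used for the AUV additionally handles the pitch curvature and pitch-angle limits; this fragment is only the planar building block.

```python
import math

def mod2pi(a):
    """Wrap an angle into [0, 2*pi)."""
    return a % (2.0 * math.pi)

def dubins_lsl_length(start, goal, rho):
    """Length of the Left-Straight-Left Dubins path between two planar
    configurations (x, y, heading), with minimum turning radius rho."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    d = math.hypot(dx, dy) / rho                 # normalized distance
    phi = math.atan2(dy, dx)
    alpha = mod2pi(start[2] - phi)               # normalized start heading
    beta = mod2pi(goal[2] - phi)                 # normalized goal heading
    tmp = d + math.sin(alpha) - math.sin(beta)
    p_sq = 2.0 + d * d - 2.0 * math.cos(alpha - beta) \
           + 2.0 * d * (math.sin(alpha) - math.sin(beta))
    if p_sq < 0.0:
        return float("inf")                      # LSL word infeasible here
    angle = math.atan2(math.cos(beta) - math.cos(alpha), tmp)
    t = mod2pi(-alpha + angle)                   # first left arc
    p = math.sqrt(p_sq)                          # straight segment
    q = mod2pi(beta - angle)                     # second left arc
    return rho * (t + p + q)
```

Because the segments are arcs and straight lines computed in closed form, evaluating all six words is far cheaper than sampling-based planners, which is the efficiency argument made above.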

By: Bilal Wehbe

Occlusion detection & handling in monocular SLAM

Real-time monocular SLAM has been a problem under study for around a decade now, with its first breakthrough coming with the introduction of filter-based techniques (SceneLib in 2004). Back then, the tracked map was nothing but a dozen manually selected features that did not convey much information about the tracked scene. Released in 2007, PTAM laid the groundwork for SfM-based techniques to be used as a successful monocular SLAM solution, allowing extraction and tracking of thousands of features in real time. But, still, the tracked features are nothing more than a point cloud that carries no meaningful description of the scene's geometry. With the introduction of image-based techniques, DTAM was able to achieve pixel-based tracking and mapping, allowing a dense depth map of the scene to be created in real time. But this technique requires a state-of-the-art GPU to handle its immense computational cost. Our work is based on PTAM, i.e., a sparse SfM-based technique that allows real-time tracking and mapping of a scene based on a sparse set of extracted features. The contribution of our work to PTAM is a bottom-up approach for surface estimation of the scene; knowledge of the scene allows us to track the camera's pose within the scene and to estimate regions occluded from certain viewpoints so that they are properly handled.
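
Once a surface estimate of the scene is available, a map point can be tested for occlusion with a depth-buffer-style check: project the point into the current view and compare its depth against the estimated surface depth along the same ray. The sketch below illustrates that test under assumed pinhole intrinsics; it is a simplified stand-in for the handling used in our PTAM-based system.

```python
def project(point, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates.

    Returns (u, v, z): the pixel location and the point's depth.
    Intrinsics here are assumed example values for a 640x480 camera.
    """
    x, y, z = point
    return fx * x / z + cx, fy * y / z + cy, z

def is_occluded(point, depth_map, eps=0.05):
    """A map point is occluded from the current view when the estimated
    scene surface along its viewing ray lies closer to the camera than
    the point itself (within a tolerance eps for on-surface points)."""
    u, v, z = project(point)
    surface_z = depth_map[int(round(v))][int(round(u))]
    return z > surface_z + eps
```

Points flagged this way can be withheld from the tracker instead of being wrongly matched against pixels that actually image the occluding surface.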

By: Georges Younes

Depth estimation from edge and blur estimation

The standard approach to edge detection is based on a model of edges as large step changes in intensity. This approach fails to reliably detect and localize edges in natural images where blur scale and contrast can vary over a broad range. The main problem is that the appropriate spatial scale for local estimation depends upon the local structure of the edge, and thus varies unpredictably over the image. Here we show how we can estimate the depth or distances of objects in a scene based on images formed by lenses using the proposed edge detection method. The recovery is based on measuring the change in the scene's image due to a known change in the three intrinsic camera parameters: (i) distance between the lens and the image detector, (ii) focal length of the lens, and (iii) diameter of the lens aperture. We show that edges spanning a broad range of blur scales and contrasts can be recovered accurately by a single system with no input parameters other than the second moment of the sensor noise. A natural dividend of this approach is a measure of the thickness of contours which can be used to estimate focal and penumbral blur. Local scale control is shown to be important for the estimation of blur in complex images, where the potential for interference between nearby edges of very different blur scale requires that estimates be made at the minimum reliable scale.
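
The link between depth and blur underlying this recovery can be sketched with the thin-lens forward model: an object at distance u focuses at v with 1/f = 1/u + 1/v, and when the detector sits at a distance s different from v, a point spreads into a blur circle whose diameter grows with the defocus. The fragment below shows only that standard forward model (inverting it given a measured blur scale is the essence of depth from defocus); the numeric values in the usage are assumptions.

```python
def blur_circle_diameter(u, f, s, aperture):
    """Diameter of the defocus blur circle for a thin lens [m].

    u:        object distance from the lens [m]
    f:        focal length of the lens [m]
    s:        distance between the lens and the image detector [m]
    aperture: diameter of the lens aperture [m]

    The three camera parameters f, s, and aperture are exactly the
    intrinsics whose known changes drive the depth recovery.
    """
    return aperture * s * abs(1.0 / f - 1.0 / u - 1.0 / s)
```

When the detector sits at the in-focus distance the blur vanishes, and it grows as the detector (or the object) moves away from that distance, which is what a blur estimate at an edge lets us invert for depth.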

By: Hussein Jlailaty

Design and Modeling of a Novel Single-Actuator Differentially Driven Robot

This project aims to design and model a differentially driven robot using a single actuator. The differential drive is achieved by varying wheel diameters rather than angular speeds. The wheels are modeled as springs, such that a mass moving along the axis of the robot applies more tension to one of these springs than to the other, thus changing the wheel diameters. This allows the velocity of each wheel to differ based on the position of this mass. For forward motion, the mass acts as a pendulum rotating about the axis of the robot, allowing velocity in the forward or backward direction due to inertial forces. In other words, the wheels are not directly actuated but rotate due to the dynamics of the system. The full set of generalized coordinates consists of two planar position coordinates (x, y); four angles, namely the steering angle θ, the inclination angle β, the actuated angle α, and the wheel angle φ; and the distance d of the pendulum mass from the center of the robot. These can hopefully be reduced later, owing to the effects of d and α on β and φ, so that the final system can be completely modeled with the pose coordinates and the input. Figures 1 and 2 show proposed designs for the robot. Figure 2 will be updated, as the design has changed from a pendulum with a single mass to a pendulum with two opposing masses, or a disk mass.
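
The kinematic core of the idea can be sketched as follows: both wheels share a single angular speed, so each wheel's ground velocity is that speed times its own effective radius, and turning comes from the radius difference rather than a speed difference. This is a simplified planar illustration with assumed dimensions, not the full dynamic model with the pendulum and spring states described above.

```python
import math

def simulate_single_actuator_drive(r_left, r_right, omega=5.0, track=0.2,
                                   T=2.0, dt=1e-3):
    """Planar trajectory of a differential drive whose two wheels share one
    angular speed omega [rad/s] but may have different effective radii [m].

    Each wheel's ground speed is v_i = omega * r_i, so
    forward speed: v = (v_l + v_r) / 2
    yaw rate:      (v_r - v_l) / track
    """
    x = y = theta = 0.0
    v_l, v_r = omega * r_left, omega * r_right
    v = 0.5 * (v_l + v_r)
    yaw_rate = (v_r - v_l) / track
    for _ in range(int(T / dt)):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += yaw_rate * dt
    return x, y, theta
```

With equal radii the robot drives straight; inflating one wheel's effective radius makes it curve without ever commanding a wheel-speed difference, which is exactly the mechanism the spring-mass design exploits.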

By: Mohamad Alsalman