Action and intention recognition of pedestrians in urban settings are challenging problems for Advanced Driver Assistance Systems as well as for future autonomous vehicles that must maintain smooth and safe traffic. This work investigates a number of feature extraction methods in combination with several machine learning algorithms to build knowledge on how to automatically detect the actions and intentions of pedestrians in urban traffic. We focus on pedestrian motion and head orientation to predict whether a pedestrian is about to cross the street. The work is based on the Joint Attention in Autonomous Driving (JAAD) dataset, which contains 346 video clips of various traffic scenarios captured with cameras mounted behind the windshield of a car. In our experiments, we obtain an accuracy of 72% for head orientation estimation and 85% for motion detection.