Publications (10 of 54)
Varytimidis, D., Alonso-Fernandez, F., Duran, B. & Englund, C. (2019). Action and Intention Recognition of Pedestrians in Urban Traffic. In: Proceedings - 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018. Paper presented at 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018, 26 November 2018 through 29 November 2018 (pp. 676-682). Institute of Electrical and Electronics Engineers Inc.
Action and Intention Recognition of Pedestrians in Urban Traffic
2019 (English). In: Proceedings - 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018, Institute of Electrical and Electronics Engineers Inc., 2019, p. 676-682. Conference paper, Published paper (Refereed)
Abstract [en]

Action and intention recognition of pedestrians in urban settings are challenging problems for Advanced Driver Assistance Systems as well as future autonomous vehicles to maintain smooth and safe traffic. This work investigates a number of feature extraction methods in combination with several machine learning algorithms to build knowledge on how to automatically detect the action and intention of pedestrians in urban traffic. We focus on the motion and head orientation to predict whether the pedestrian is about to cross the street or not. The work is based on the Joint Attention for Autonomous Driving (JAAD) dataset, which contains 346 videoclips of various traffic scenarios captured with cameras mounted in the windshield of a car. An accuracy of 72% for head orientation estimation and 85% for motion detection is obtained in our experiments.
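The two cues above, head orientation and motion, can be pictured with a toy rule-based fusion. The labels and the rule below are invented for illustration only; the paper itself trains machine-learning classifiers on extracted image features rather than using a hand-written rule.

```python
# Toy illustration (not the paper's method): combine a head-orientation
# estimate and a motion estimate into a crossing prediction.
# The label values and the rule itself are hypothetical.

def predict_crossing(head_orientation: str, motion: str) -> bool:
    """Predict whether a pedestrian is about to cross the street."""
    looking_at_traffic = head_orientation == "looking"
    moving = motion == "walking"
    # A pedestrian who checks for traffic while walking is treated as
    # likely to cross; any other combination as unlikely.
    return looking_at_traffic and moving

print(predict_crossing("looking", "walking"))   # -> True
print(predict_crossing("away", "standing"))     # -> False
```

In the paper, both cues are instead estimated from image features and fed to learned classifiers, which is what yields the reported 72% and 85% accuracies.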

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019
Keywords
Action recognition, Driver assistance, Intention recognition, Pedestrian, Traffic, Automobile drivers, Autonomous vehicles, Feature extraction, Learning algorithms, Machine learning, Telecommunication traffic, Traffic signals, Action and intention, Autonomous driving, Feature extraction methods, Head orientation estimations, Advanced driver assistance systems
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-38894 (URN), 10.1109/SITIS.2018.00109 (DOI), 2-s2.0-85065906502 (Scopus ID), 9781538693858 (ISBN)
Conference
14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018, 26 November 2018 through 29 November 2018
Note

Funding: This work was financed by the SIDUS AIR project of the Swedish Knowledge Foundation under grant agreement number 20140220. Author F. A.-F. also thanks the Swedish Research Council (VR) and Sweden's innovation agency (VINNOVA).

Available from: 2019-06-10. Created: 2019-06-10. Last updated: 2019-06-10. Bibliographically approved.
Alonso-Fernandez, F., Bigun, J. & Englund, C. (2019). Expression Recognition Using the Periocular Region: A Feasibility Study. In: Proceedings - 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018. Paper presented at 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018, 26 November 2018 through 29 November 2018 (pp. 536-541). Institute of Electrical and Electronics Engineers Inc.
Expression Recognition Using the Periocular Region: A Feasibility Study
2019 (English). In: Proceedings - 14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018, Institute of Electrical and Electronics Engineers Inc., 2019, p. 536-541. Conference paper, Published paper (Refereed)
Abstract [en]

This paper investigates the feasibility of using the periocular region for expression recognition. Most works have tried to solve this by analyzing the whole face. Periocular is the facial region in the immediate vicinity of the eye. It has the advantage of being available over a wide range of distances and under partial face occlusion, thus making it suitable for unconstrained or uncooperative scenarios. We evaluate five different image descriptors on a dataset of 1,574 images from 118 subjects. The experimental results show an average/overall accuracy of 67.0/78.0% by fusion of several descriptors. While this accuracy is still behind that attained with full-face methods, it is noteworthy that our initial approach uses only a single frame to predict the expression, whereas state-of-the-art methods exploit orders of magnitude more data in the form of spatio-temporal sequences, which are often not available.
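The descriptor fusion mentioned in the abstract can be sketched as simple score-level averaging. The descriptor scores and class layout below are invented for illustration and do not come from the paper, whose actual descriptors and fusion scheme are described in the full text.

```python
# Hypothetical sketch: average per-class scores from several image
# descriptors and pick the winning expression class.
# The score values below are made up for illustration.

def fuse_and_classify(score_lists):
    """Average per-class scores across descriptors; return the argmax class index."""
    n_classes = len(score_lists[0])
    fused = [sum(scores[i] for scores in score_lists) / len(score_lists)
             for i in range(n_classes)]
    return fused.index(max(fused))

# Two (invented) descriptors each scoring three expression classes
d1 = [0.2, 0.5, 0.3]
d2 = [0.1, 0.7, 0.2]
print(fuse_and_classify([d1, d2]))  # -> 1 (both descriptors favor class 1)
```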

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019
Keywords
Emotion recognition, Expression recognition, Periocular analysis, Periocular descriptor, Descriptors, Feasibility studies, Full face method, Image descriptors, Periocular, Spatial-temporal data
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-38895 (URN), 10.1109/SITIS.2018.00087 (DOI), 2-s2.0-85065903319 (Scopus ID), 9781538693858 (ISBN)
Conference
14th International Conference on Signal Image Technology and Internet Based Systems, SITIS 2018, 26 November 2018 through 29 November 2018
Note

Funding: Author F. A.-F. thanks the Swedish Research Council for funding his research. The authors acknowledge the CAISR program and the SIDUS-AIR project of the Swedish Knowledge Foundation.

Available from: 2019-06-10. Created: 2019-06-10. Last updated: 2019-06-10. Bibliographically approved.
Habibovic, A., Andersson, J., Malmsten Lundgren, V., Klingegård, M., Englund, C. & Larsson, S. (2019). External Vehicle Interfaces for Communication with Other Road Users?. In: Gereon Meyer, Sven Beiker (Ed.), Road Vehicle Automation 5. Paper presented at Road Vehicle Automation 5 (pp. 91-102).
External Vehicle Interfaces for Communication with Other Road Users?
2019 (English). In: Road Vehicle Automation 5 / [ed] Gereon Meyer, Sven Beiker, 2019, p. 91-102. Chapter in book (Refereed)
Abstract [en]

How to ensure trust and societal acceptance of automated vehicles (AVs) is a widely-discussed topic today. While trust and acceptance could be influenced by a range of factors, one thing is sure: the ability of AVs to safely and smoothly interact with other road users will play a key role. Based on our experiences from a series of studies, this paper elaborates on issues that AVs may face in interactions with other road users and whether external vehicle interfaces could support these interactions. Our overall conclusion is that such interfaces may be beneficial in situations where negotiation is needed. However, these benefits, and potential drawbacks, need to be further explored to create a common language, or standard, for how AVs should communicate with other road users.

National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-37616 (URN)
Conference
Road Vehicle Automation 5
Note

The Automated Vehicles Symposium 2017

Part of the Lecture Notes in Mobility book series (LNMOB)

Available from: 2019-01-29. Created: 2019-01-29. Last updated: 2019-06-27. Bibliographically approved.
Sprei, F., Habibi, S., Englund, C., Pettersson, S., Voronov, A. & Wedlin, J. (2019). Free-floating car-sharing electrification and mode displacement: Travel time and usage patterns from 12 cities in Europe and the United States. Transportation Research Part D: Transport and Environment, 71, 127-140
Free-floating car-sharing electrification and mode displacement: Travel time and usage patterns from 12 cities in Europe and the United States
2019 (English). In: Transportation Research Part D: Transport and Environment, ISSN 1361-9209, E-ISSN 1879-2340, Vol. 71, p. 127-140. Article in journal (Refereed), Published
Abstract [en]

Free-floating car-sharing (FFCS) allows users to book a vehicle through their phone, use it and return it anywhere within a designated area in the city. FFCS has the potential to contribute to a transition to low-carbon mobility if the vehicles are electric, and if the usage does not displace active travel or public transport use. The aim of this paper is to study what travel time and usage patterns of the vehicles among the early adopters of the service reveal about these two issues. We base our analysis on a dataset containing rentals from 2014 to 2017, for 12 cities in Europe and the United States. For seven of these cities, we have collected travel times for equivalent trips with walking, biking, public transport and private car. FFCS services are mainly used for shorter trips, with a median rental time of 27 min and actual driving time closer to 15 min. When comparing FFCS with other transport modes, we find that rental times are generally shorter than the equivalent walking time but longer than cycling. For public transport, the picture is mixed: for some trips there is no major time gain from taking FFCS, for others it could be up to 30 min. For electric FFCS vehicles, rental time is shorter and the number of rentals per car per day is slightly lower compared to conventional vehicles. Still, evidence from cities with an all-electric fleet shows that these services can be electrified and reach high levels of utilization.
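The per-trip travel-time comparison described above can be sketched as a small mode-versus-mode calculation. All minute values below are invented for illustration and are not taken from the paper's dataset.

```python
# Toy version of the travel-time comparison: for one trip, compare an
# FFCS rental against equivalent modes. The numbers are made up.

trip_minutes = {"ffcs": 15, "walking": 45, "cycling": 12, "public_transport": 40}

def time_gain_vs(mode, trips=trip_minutes):
    """Minutes saved by taking FFCS instead of `mode` (negative means FFCS is slower)."""
    return trips[mode] - trips["ffcs"]

print(time_gain_vs("public_transport"))  # -> 25
print(time_gain_vs("cycling"))           # -> -3
```

With the invented numbers the pattern matches the abstract's finding: FFCS beats walking and (sometimes substantially) public transport, but not cycling.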

Keywords
Alternative trips, Electric vehicles, Free-floating car-sharing, Shared mobility, Travel time, Usage patterns, Vehicles, Floating car, Low carbon, Mode-displacements, Public transport, Time gain, Transport modes
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-36926 (URN), 10.1016/j.trd.2018.12.018 (DOI), 2-s2.0-85058802098 (Scopus ID)
Available from: 2018-12-28. Created: 2018-12-28. Last updated: 2019-06-28. Bibliographically approved.
Borg, M., Englund, C., Wnuk, K., Duran, B., Levandowski, C., Gao, S., . . . Törnqvist, J. (2019). Safely Entering the Deep: A Review of Verification and Validation for Machine Learning and a Challenge Elicitation in the Automotive Industry. Journal of Automotive Software Engineering, 1(1), 1-13
Safely Entering the Deep: A Review of Verification and Validation for Machine Learning and a Challenge Elicitation in the Automotive Industry
2019 (English). In: Journal of Automotive Software Engineering, Vol. 1, no 1, p. 1-13. Article in journal (Refereed), Published
Abstract [en]

Deep neural networks (DNNs) will emerge as a cornerstone in automotive software engineering. However, developing systems with DNNs introduces novel challenges for safety assessments. This paper reviews the state-of-the-art in verification and validation of safety-critical systems that rely on machine learning. Furthermore, we report from a workshop series on DNNs for perception with automotive experts in Sweden, confirming that ISO 26262 largely contravenes the nature of DNNs. We recommend aerospace-to-automotive knowledge transfer and systems-based safety approaches, for example, safety cage architectures and simulated system test cases.

Keywords
Deep learning, Safety-critical systems, Machine learning, Verification and validation, ISO 26262
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-39335 (URN), 10.2991/jase.d.190131.001 (DOI)
Available from: 2019-07-05. Created: 2019-07-05. Last updated: 2019-07-05. Bibliographically approved.
Henriksson, J., Berger, C., Borg, M., Tornberg, L., Englund, C., Sathyamoorthy, S. & Ursing, S. (2019). Towards Structured Evaluation of Deep Neural Network Supervisors. In: Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019. Paper presented at 1st IEEE International Conference on Artificial Intelligence Testing, AITest 2019, 4 April 2019 through 9 April 2019 (pp. 27-34). Institute of Electrical and Electronics Engineers Inc.
Towards Structured Evaluation of Deep Neural Network Supervisors
2019 (English). In: Proceedings - 2019 IEEE International Conference on Artificial Intelligence Testing, AITest 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 27-34. Conference paper, Published paper (Refereed)
Abstract [en]

Deep Neural Networks (DNN) have improved the quality of several non-safety related products in the past years. However, before DNNs should be deployed to safety-critical applications, their robustness needs to be systematically analyzed. A common challenge for DNNs occurs when input is dissimilar to the training set, which might lead to high confidence predictions even though the network lacks proper knowledge of the input. Several previous studies have proposed to complement DNNs with a supervisor that detects when inputs are outside the scope of the network. Most of these supervisors, however, are developed and tested for a selected scenario using a specific performance metric. In this work, we emphasize the need to assess and compare the performance of supervisors in a structured way. We present a framework constituted by four datasets organized in six test cases combined with seven evaluation metrics. The test cases provide varying complexity and include data from publicly available sources as well as a novel dataset consisting of images from simulated driving scenarios. The latter we plan to make publicly available. Our framework can be used to support DNN supervisor evaluation, which in turn could be used to motivate development, validation, and deployment of DNNs in safety-critical applications.
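A common baseline of the supervisor kind described above is a max-softmax confidence threshold. The sketch below is a generic illustration with an invented threshold value; it is not the paper's framework, datasets, or metrics.

```python
import math

# Generic max-softmax supervisor (illustrative only): reject an input
# when the network's top class probability falls below a threshold,
# treating it as possibly outside the scope of the network.

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def supervisor_accepts(logits, threshold=0.8):
    """True if the prediction is confident enough to be passed downstream."""
    return max(softmax(logits)) >= threshold

print(supervisor_accepts([9.0, 1.0, 0.5]))  # peaked logits -> accepted
print(supervisor_accepts([1.1, 1.0, 0.9]))  # near-uniform logits -> rejected
```

The paper's point is precisely that such supervisors are usually evaluated ad hoc; its framework compares them across multiple test cases and metrics instead of a single threshold experiment like this one.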

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019
Keywords
Automotive perception, Deep neural networks, Out-of-distribution, Robustness, Supervisor, Artificial intelligence, Robustness (control systems), Safety engineering, Statistical tests, Supervisory personnel, Evaluation metrics, High confidence, Performance metrices, Safety critical applications, Safety-related products, Simulated driving, Structured evaluation
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-39270 (URN), 10.1109/AITest.2019.00-12 (DOI), 2-s2.0-85067113703 (Scopus ID), 9781728104928 (ISBN)
Conference
1st IEEE International Conference on Artificial Intelligence Testing, AITest 2019, 4 April 2019 through 9 April 2019
Note

Funding: This work was carried out within the SMILE II project financed by Vinnova, FFI (Fordonsstrategisk forskning och innovation), under grant number 2017-03066, and partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.

Available from: 2019-07-03. Created: 2019-07-03. Last updated: 2019-07-03. Bibliographically approved.
Torstensson, M., Duran, B. & Englund, C. (2019). Using Recurrent Neural Networks for Action and Intention Recognition of Car Drivers. In: Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods. Paper presented at 8th International Conference on Pattern Recognition Applications and Methods (pp. 232-242).
Using Recurrent Neural Networks for Action and Intention Recognition of Car Drivers
2019 (English). In: Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, 2019, p. 232-242. Conference paper, Published paper (Refereed)
Abstract [en]

Traffic situations leading up to accidents have been shown to be greatly affected by human errors. To reduce these errors, warning systems such as Driver Alert Control, Collision Warning and Lane Departure Warning have been introduced. However, there is still room for improvement, both regarding the timing of when a warning should be given as well as the time needed to detect a hazardous situation in advance. Two factors that affect when a warning should be given are the environment and the actions of the driver. This study proposes an artificial neural network-based approach consisting of a convolutional neural network and a recurrent neural network with long short-term memory to detect and predict different actions of a driver inside a vehicle. The network achieved an accuracy of 84% while predicting the actions of the driver in the next frame, and an accuracy of 58% 20 frames ahead with a sampling rate of approximately 30 frames per second.
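The next-frame versus 20-frames-ahead setup can be sketched as pairing a history window of per-frame data with a label some horizon ahead. The window size, toy labels, and helper name below are invented for illustration; the paper's actual pipeline feeds CNN features into an LSTM.

```python
# Hypothetical sketch of the sequence framing: pair each window of
# per-frame features/labels with the action `horizon` frames ahead.
# Window size, horizon values, and the toy labels are illustrative.

def make_sequences(frames, history=10, horizon=1):
    """Return (input_window, future_label) pairs for sequence prediction."""
    pairs = []
    for t in range(history, len(frames) - horizon + 1):
        window = frames[t - history:t]     # what the RNN would see
        target = frames[t + horizon - 1]   # the action it must predict
        pairs.append((window, target))
    return pairs

labels = list("AAAABBBBCCCC")              # toy per-frame action labels
next_frame = make_sequences(labels, history=4, horizon=1)
print(len(next_frame))                     # -> 8 training pairs
print(next_frame[0])                       # window of 'A's, target 'B'
```

Increasing `horizon` (toward the paper's 20-frame setting, roughly two-thirds of a second at ~30 fps) shrinks the usable training set and, as the reported accuracies show, makes the prediction task harder.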

Keywords
CNN, RNN, Optical Flow
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-39703 (URN), 10.5220/0007682502320242 (DOI)
Conference
8th International Conference on Pattern Recognition Applications and Methods
Available from: 2019-08-08. Created: 2019-08-08. Last updated: 2019-08-08. Bibliographically approved.
Varytimidis, D., Alonso-Fernandez, F., Duran, B. & Englund, C. (2018). Action and intention recognition of pedestrians in urban traffic. Paper presented at Intl Conf on Signal Image Technology & Internet Based Systems, SITIS 2018.
Action and intention recognition of pedestrians in urban traffic
2018 (English). Conference paper, Published paper (Other academic)
Abstract [en]

Action and intention recognition of pedestrians in urban settings are challenging problems for Advanced Driver Assistance Systems as well as future autonomous vehicles to maintain smooth and safe traffic. This work investigates a number of feature extraction methods in combination with several machine learning algorithms to build knowledge on how to automatically detect the action and intention of pedestrians in urban traffic. We focus on the motion and head orientation to predict whether the pedestrian is about to cross the street or not. The work is based on the Joint Attention for Autonomous Driving (JAAD) dataset, which contains 346 videoclips of various traffic scenarios captured with cameras mounted in the windshield of a car. An accuracy of 72% for head orientation estimation and 85% for motion detection is obtained in our experiments.

National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-37631 (URN)
Conference
Intl Conf on Signal Image Technology & Internet Based Systems, SITIS 2018
Available from: 2019-01-29. Created: 2019-01-29. Last updated: 2019-01-29. Bibliographically approved.
Henriksson, J., Borg, M. & Englund, C. (2018). Automotive Safety and Machine Learning: Initial Results from a Study on How to Adapt the ISO 26262 Safety Standard. Paper presented at 1st Software Engineering for AI in Autonomous Systems (pp. 47-49).
Automotive Safety and Machine Learning: Initial Results from a Study on How to Adapt the ISO 26262 Safety Standard
2018 (English). Conference paper, Published paper (Refereed)
Abstract [en]

Machine learning (ML) applications generate a continuous stream of success stories from various domains. ML enables many novel applications, also in safety-critical contexts. However, the functional safety standards such as ISO 26262 did not evolve to cover ML. We conduct an exploratory study on which parts of ISO 26262 represent the most critical gaps between safety engineering and ML development. While this paper only reports the first steps toward a larger research endeavor, we report three adaptations that are critically needed to allow ISO 26262 compliant engineering, and related suggestions on how to evolve the standard.

National Category
Software Engineering
Identifiers
urn:nbn:se:ri:diva-34195 (URN), 10.1145/3194085.3194090 (DOI), 2-s2.0-85051137851 (Scopus ID), 978-1-4503-5739-5 (ISBN)
Conference
1st Software Engineering for AI in Autonomous Systems
Available from: 2018-07-13. Created: 2018-07-13. Last updated: 2019-01-07. Bibliographically approved.
Ploeg, J., Semsar-Kazerooni, E., Morales Medina, A., de Jongh, J. F., van de Sluis, J., Voronov, A., . . . van de Wouw, N. (2018). Cooperative Automated Maneuvering at the 2016 Grand Cooperative Driving Challenge. IEEE transactions on intelligent transportation systems (Print), 19(4), 1213-1226
Cooperative Automated Maneuvering at the 2016 Grand Cooperative Driving Challenge
2018 (English). In: IEEE transactions on intelligent transportation systems (Print), ISSN 1524-9050, E-ISSN 1558-0016, Vol. 19, no 4, p. 1213-1226. Article in journal (Refereed), Published
Abstract [en]

Cooperative adaptive cruise control and platooning are well- known applications in the field of cooperative automated driving. However, extension toward maneuvering is desired to accommodate common highway maneuvers, such as merging, and to enable urban applications. To this end, a layered control architecture is adopted. In this architecture, the tactical layer hosts the interaction protocols, describing the wireless information exchange to initiate the vehicle maneuvers, supported by a novel wireless message set, whereas the operational layer involves the vehicle controllers to realize the desired maneuvers. This hierarchical approach was the basis for the Grand Cooperative Driving Challenge (GCDC), which was held in May 2016 in The Netherlands. The GCDC provided the opportunity for participating teams to cooperatively execute a highway lane-reduction scenario and an urban intersection-crossing scenario. The GCDC was set up as a competition and, hence, also involving assessment of the teams' individual performance in a cooperative setting. As a result, the hierarchical architecture proved to be a viable approach, whereas the GCDC appeared to be an effective instrument to advance the field of cooperative automated driving.
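The tactical-layer interaction protocols described above can be pictured as a short message hand-shake between vehicles. The message names and sequence below are invented for illustration; the actual GCDC wireless message set is specified in the paper.

```python
# Invented merge-negotiation hand-shake (illustrative only; the real
# GCDC message set and interaction protocols are defined in the paper).

def merge_handshake(requester, cooperator):
    """Return ordered (sender, receiver, message) tuples for a toy merge maneuver."""
    return [
        (requester, cooperator, "MERGE_REQUEST"),   # tactical layer: ask for a gap
        (cooperator, requester, "GAP_OPENING"),     # operational layer starts opening a gap
        (cooperator, requester, "GAP_READY"),       # gap is large enough to merge into
        (requester, cooperator, "MERGE_COMPLETE"),  # maneuver finished
    ]

for msg in merge_handshake("carA", "carB"):
    print(msg)
```

The split mirrors the layered architecture in the abstract: the message exchange lives in the tactical layer, while the actual gap-opening and merging trajectories are realized by vehicle controllers in the operational layer.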

Keywords
Merging, Road transportation, Protocols, Wireless communication, Automation, Trajectory, Safety, adaptive control, cooperative communication, mobile robots, motion control, multi-robot systems, protocols, road safety, road traffic control, road vehicles, traffic engineering computing, vehicular ad hoc networks, velocity control, Cooperative driving, interaction protocol, controller design, vehicle platoons, wireless communications
National Category
Natural Sciences
Identifiers
urn:nbn:se:ri:diva-33814 (URN), 10.1109/TITS.2017.2765669 (DOI), 2-s2.0-85035089916 (Scopus ID)
Available from: 2018-05-04. Created: 2018-05-04. Last updated: 2019-06-17. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0002-1043-8773