What is socially compliant driving in AVs?

The idea of navigating by means of driving style is another aspect that needs to be considered when deploying a new generation of AVs.

Socially Compliant Driving: this is defined as predictable behavior by an AV that other human and autonomous agents can anticipate, so that humans understand the AV's actions and can respond to them appropriately. Such predictability is fundamental to the safety of both the passengers and the surrounding vehicles. To achieve this driving capability, the autonomous system needs to behave like a typical human driver, with an intrinsic understanding of human behavior and of the social expectations of the group. One way to incorporate human behavior is to imitate human policies learned from data: observations of past human driving can be used to predict and mimic human trajectories through imitation learning. Alternatively, human reward functions can themselves be learned, enabling social compliance so that an optimal, best-response policy can be computed in the interaction game.
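The imitation-learning idea above can be sketched in its simplest form, behavioral cloning: fit a policy to state-action pairs recorded from human drivers, then use it to predict the human-like action in a new situation. This is a minimal illustrative sketch, not any of the cited algorithms; the state (gap to the lead vehicle), the action (acceleration), and all numbers are hypothetical.

```python
# Minimal behavioral-cloning sketch (one basic form of imitation learning):
# fit a linear policy mapping an observed state (gap to the car ahead) to
# an action (acceleration) from recorded human demonstrations.
# All values below are hypothetical illustrative data, not real driving logs.

def fit_linear_policy(states, actions, lr=0.01, epochs=2000):
    """Fit action ~= w * state + b by gradient descent on squared error."""
    w, b = 0.0, 0.0
    n = len(states)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for s, a in zip(states, actions):
            err = (w * s + b) - a      # prediction error on this demo pair
            grad_w += 2 * err * s / n
            grad_b += 2 * err / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Hypothetical human demonstrations: a larger gap invites more acceleration.
demo_states  = [1.0, 2.0, 3.0, 4.0]   # gap to lead vehicle (arbitrary units)
demo_actions = [0.5, 1.0, 1.5, 2.0]   # human-chosen acceleration

w, b = fit_linear_policy(demo_states, demo_actions)
predicted = w * 2.5 + b               # imitated action for an unseen gap
```

A real system would replace the linear model with a richer policy class and far more data; the inverse-reinforcement-learning alternative mentioned above would instead recover the reward the human demonstrations appear to optimize, and plan against it.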

In my opinion, classifying such elements into well-defined groups will help future researchers find more solutions, and use the observed patterns to build detailed and accurate mathematical models that allow AVs to be socially acceptable and to navigate minor gaps and turns in heavy traffic. The greater the number of observations recorded, the greater the number of average road paths that can be scaled up and fitted into the trajectory memories of AVs.

Source:- S. Ross, G. Gordon, D. Bagnell, "A reduction of imitation learning and structured prediction to no-regret online learning" in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, G. Gordon, D. Dunson, M. Dudík, Eds. (Proceedings of Machine Learning Research, Fort Lauderdale, FL, 2011), vol. 15, pp. 627–635;

J. Ho, S. Ermon, "Generative adversarial imitation learning" in Advances in Neural Information Processing Systems 29, D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, R. Garnett, Eds. (Neural Information Processing Systems Foundation, 2016), pp. 4565–4573;

B. D. Ziebart, A. L. Maas, J. A. Bagnell, A. K. Dey, “Maximum entropy inverse reinforcement learning” in Proceedings of the 23rd AAAI Conference on Artificial Intelligence, A. Cohn, Ed. (Association for the Advancement of Artificial Intelligence, Palo Alto, CA, 2008), vol. 8, pp. 1433–1438;

H. Kretzschmar, M. Spies, C. Sprunk, W. Burgard, Socially compliant mobile robot navigation via inverse reinforcement learning. Int. J. Robot. Res. 35, 1289–1307 (2016);

D. Sadigh, S. Sastry, S. A. Seshia, A. D. Dragan, “Planning for autonomous cars that leverage effects on human actions” in Proceedings of Robotics: Science and Systems, D. Hsu, N. Amato, S. Berman, S. Jacobs, Eds. http://www.roboticsproceedings.org/rss12/p29.html. Accessed 15 November 2019.
