Although AI is widely seen as a boon to humankind, it raises many pressing questions, among them ethical concerns. What happens if machine learning fails to revert a discriminatory decision made by an AI?
Artificial intelligence has been used in financial services since long before chatbots. But a wrong address or an erroneous entry at Schufa, the German credit agency, can cause insurance or credit to be refused for a long time.
In a smart factory, how should an autonomous, AI-based robot decide when it faces two bad alternatives, each of which would injure a human colleague?
A panel of EU experts recently addressed these questions: it presented ethical guidelines for discussion at the end of 2018 and published the final version in April. By the end of June 2019, the panel had drafted recommendations for action by the European Union.
However, as far as can be seen, the guidelines do not go far enough. Their stated aim is "to obtain the maximum benefit from AI while taking the least possible risks. To ensure that we stay on the right track, we need a people-centered approach to AI."
The panel developed a checklist of questions for AI vendors on whether their systems operate safely. But the EU Commission received no recommendations on what minimum and security requirements should be imposed on AI operators to ensure that people remain the real focus of the digital transformation, as its creators and beneficiaries.
One thing is very clear: operators will not change an AI that is successful, economically or in the eyes of the authorities, just because it acts a little discriminatorily.