In Europe, one of the central challenges in making AI trustworthy to citizens is working through a list of special requirements. These considerations cover ethical, legal, and societal issues alongside socio-economic challenges.
A panel of 52 representatives from science, civil society, and industry was appointed to develop the guidelines further.
The key requirements for trustworthy AI in the final guidelines reflect a human-centered approach:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Diversity, non-discrimination, and fairness
- Societal and environmental well-being
The goal is not to confront AI developers and users with problems or risks, but to help them apply the principles correctly, so that the guidelines can serve as a handbook for evaluating AI systems.
These requirements set a healthy boundary both for AI developers and for society at large. At the same time, they reveal trust issues in Europe concerning AI: because the capacity of AI exceeds anyone's imagination, fears of data being mishandled, of systems being hacked through other devices, or of being controlled by AI entirely remain obstacles for European citizens. Hopefully, the requirements can serve as a benchmark and a guided path for other countries as well.
Source: Industry 4.0