Big data intelligence is coming of age and playing major roles across other sectors of science, but should we be concerned about privacy, bias, inequality, safety, and security?
According to Cave & OhEigeartaigh (2018) and Dietterich (2015), the future development of big data intelligence may pose long-term safety and security risks. Van Wynsberghe & Robbins (2018) and Feldstein (2019) present worldwide facts and evidence that big data intelligence is creating a dilemma in the relationship between citizens and the state, accelerating authoritarianism.
A study by Shahbaz & Phukan (2018) shows that AI-generated fake clips create an even greater threat than everyday fake news. As daily news coverage of politics makes clear, social media plays a crucial role in election campaigns, letting candidates build close connections with audiences and voters; in that environment, a fake video or clip spreads like a virus, leaving an impression that, for the moment, is all but incurable. Such clips have enormous potential to exploit sensitive topics and create havoc among the people they target.
According to Goodfellow et al. (2014), generative adversarial networks (GANs) can create concocted images and videos that are completely realistic. A GAN consists of two dueling neural networks: a generator that produces candidate samples and a discriminator that tries to distinguish them from real data. GANs can even generate photo-realistic faces of any race, gender, or age, tailored to customer requirements. This is undoubtedly cutting-edge innovation, but its capabilities are chilling when one considers the potential threats.
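The adversarial training described by Goodfellow et al. can be illustrated in miniature. The sketch below is a toy, one-dimensional analogue (not an image-generating GAN): the "generator" and "discriminator" are simple affine models, gradients are taken by finite differences rather than backpropagation, and the real data is an assumed Gaussian at mean 4. All function and variable names are illustrative, not from the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def G(z, theta):
    # Generator: affine map from noise z into sample space.
    return theta[0] * z + theta[1]

def D(x, theta):
    # Discriminator: probability that x came from the real data.
    return sigmoid(theta[0] * x + theta[1])

def d_loss(theta_d, theta_g, real, z):
    # Discriminator tries to label real data 1 and fakes 0.
    eps = 1e-8
    fake = G(z, theta_g)
    return (-np.mean(np.log(D(real, theta_d) + eps))
            - np.mean(np.log(1.0 - D(fake, theta_d) + eps)))

def g_loss(theta_g, theta_d, z):
    # Non-saturating generator loss: make fakes look real to D.
    eps = 1e-8
    return -np.mean(np.log(D(G(z, theta_g), theta_d) + eps))

def num_grad(f, theta, h=1e-5):
    # Finite-difference gradient; sufficient for this toy model.
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        up, dn = theta.copy(), theta.copy()
        up[i] += h
        dn[i] -= h
        g[i] = (f(up) - f(dn)) / (2 * h)
    return g

theta_d = np.array([0.1, 0.0])
theta_g = np.array([1.0, 0.0])   # initial fakes ~ N(0, 1)
lr = 0.05

# The two networks "duel": D descends its loss, then G descends its own.
for step in range(500):
    real = rng.normal(4.0, 0.5, size=64)   # assumed "real" distribution
    z = rng.normal(size=64)
    theta_d -= lr * num_grad(lambda t: d_loss(t, theta_g, real, z), theta_d)
    theta_g -= lr * num_grad(lambda t: g_loss(t, theta_d, z), theta_g)

fake_mean = float(np.mean(G(rng.normal(size=1000), theta_g)))
print(fake_mean)  # should drift from 0 toward the real mean of 4
```

The same push-and-pull, scaled up to deep convolutional networks and image data, is what lets a GAN produce photo-realistic faces: the generator improves precisely because the discriminator keeps finding its flaws.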
These applications create social anxiety and negative attitudes among non-technical groups and people who are not keen on AI. Yet consider how far the technology has come, beyond what once seemed possible: AI demonstrates what the human brain can achieve, turning the impossible into a well-crafted tool that can be used with a paid subscription.
Should we stop innovating because of these threats? Should we limit ourselves by merely counteracting potential threats? Or should we focus on measures and methods to keep our technology from being misused or becoming a viral threat in someone's life?