Another AI security company creates an ethics policy for code development and use

Physical and digital security vendor Prosegur Security says it is getting serious about responsible AI, creating a responsible AI policy and hiring a chief ethics officer.
Still in the works, the firm’s responsible AI policy is an effort to get the most out of its algorithms while making sure people’s safety remains the top priority. The policy will set out security, ethical, moral and regulatory values.
The company says every part of its global operation will have to accept and implement the policy, as will all business partners. Prosegur’s offerings include computer vision and video surveillance systems.
All AI development at Prosegur will have to protect and preserve the rights and freedoms of everyone it affects. Corporate and local AI governance boards will help keep the policy front of mind for staff and developers.
Part of the policy is ensuring that people can oversee and intervene in how the code operates, a responsibility that will fall to the yet-to-be-hired chief ethics officer.
Microsoft announced in June that it had created its own responsible AI framework.
As a direct result of that work, Microsoft executives said the company would retire Azure face recognition and analysis capabilities designed to identify age, gender, emotional states and other qualities, citing concerns about bias and inaccuracy.
Global consulting firm Accenture wrote a year ago that businesses had to move beyond just talking about the virtues of trust in AI. Responsible AI frameworks, it argued, can help motivated companies make sure people and firms are safe in the presence of AI.
Accenture spells out four pillars of responsible AI: organizational, operational, technical and reputational. That final pillar calls for creating a clear AI mission that espouses company values and ethical guardrails.