For a few different reasons, the idea of Artificial Intelligence (as well as Automation) scares many people.  Some feel that it will put workers, especially those in the lower half of the income scale, out of a job ... others worry that AI may continue (or worsen) many of the biases and prejudices in society today ... and others are fearful of machines taking over (like in the Terminator films).


So, what can we do to establish rules and guidelines to make sure that AI avoids those outcomes and truly helps society?  At a recent summit (the New Work Summit, hosted by the New York Times), attendees put together a list of 10 things that can help ensure that AI is deployed ethically and fairly.  I have added some comments to each:


Transparency – Companies should be transparent about the design, intention and use of A.I.

An interesting point was brought up during a recent episode of “Last Week Tonight with John Oliver”, which showed interviews with telephone operators from about 35 years ago.  They were all concerned about how automated systems were replacing them, and their bosses were not saying anything.  Many AI solutions will be used to replace workers, but many will be used to enhance their work ... companies need to explain why they are making the change, and how it will affect things, early on in the process.


Disclosure – Companies should clearly disclose to users what data is being collected and how it is being used

The big thing here is that the disclosure has to be clearly stated.  Putting it on page 125 of a boring legal document that everyone agrees to without reading is not sufficient.


Privacy – Users should be able to easily opt out of data collection

As I have always stated, a free service is not free.  If a service is free, the user must accept that the company has costs to keep it running, so the collection and resale of data is likely.  As well, many services, like Uber, do not make the extent of their data collection obvious ... it often continues well beyond when someone is actually using the service.  This needs to be clearer.


Diversity – A.I. should be developed by inherently diverse teams

This is to ensure that systems are fair and accurate.  It includes, as an example, using a more diverse set of images when training facial recognition systems.


Bias – Companies should strive to avoid bias in A.I. by drawing on diverse data sets

Bias may be intentional or unintentional.  Someone may have an obvious bias towards a certain group, but all of us may be more biased on an empty stomach, as an example.  The appeal of A.I. is supposed to be that its judgement does not fluctuate with experience or blood sugar level.

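To make the data side of the Diversity and Bias points concrete, here is a minimal sketch in Python of the kind of balance audit a team might run on a training set before using it.  The group labels, counts and the 10% threshold are all hypothetical, chosen purely for illustration ... this is not a method prescribed by the summit.

```python
from collections import Counter

def audit_balance(labels, min_share=0.10):
    """Report each group's share of a dataset and flag under-represented groups.

    labels: one group tag per training example (e.g. per face image).
    min_share: the fraction below which a group is flagged.
    Both are illustrative assumptions, not a standard.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    flagged = []
    for group, n in sorted(counts.items()):
        share = n / total
        print(f"{group}: {n} examples ({share:.1%})")
        if share < min_share:
            flagged.append(group)
    return flagged

# Hypothetical, badly skewed training set: 80% / 15% / 5%.
labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
print("Under-represented:", audit_balance(labels))  # -> ['group_c']
```

A real audit would obviously go deeper than raw counts, but even a simple check like this catches the most obvious skew before it gets baked into a model.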

Trust – Organizations should have internal processes to regulate the misuse of A.I. (such as a Chief Ethics Officer and/or an Ethics board)

Not much to say here ... I think organizations also need to gather input from their customers and employees.


Accountability – There should be a common set of standards by which companies are held accountable for the use and impact of their A.I. technology

This one is tough.  Something that was developed for good can often be used for bad, such as a chemical being turned into a weapon or a mathematical equation from finance being used to help develop a bomb.  How much responsibility does the original inventor have?  However, if a company designs a product for a particular use, such as for war, it needs to be open with its employees about that.


Collective governance – Companies should work together to self-regulate the industry

The bigger issue here may be getting governments to work together.  Having many governments set up rules for compensating displaced workers, addressing ethical concerns and more is great, but if not all countries agree, the compliant ones are put at a disadvantage, and it encourages outsourcing to countries with less stringent rules.


Regulation – Companies should work with regulators to develop appropriate laws to govern the use of A.I.

As with the last point, in a world of global commerce we need countries to work together on a common set of goals and rules.


Complementarity – Treat A.I. as a tool for humans to use, not a replacement for human work

In the same episode of “Last Week Tonight”, they used the example of bank tellers.  One common misconception is that automated banking machines led to the demise of the bank teller position.  In fact, in the first 20 years or so after bank machines were introduced, the number of teller positions actually went up.  The reason is that the role of the teller shifted to become more sales focused, rather than mainly dispensing cash.  I think many companies will follow this model, but some will not and will simply use AI to cut jobs.