Days later my ears are still ringing from the booming baritone of the public-address announcer in the keynote session on the first morning of the IBM Think 2018 conference in Las Vegas. I keep wondering when the ringing will subside, and with it a single thought continues to linger from the presentation delivered by IBM's CEO, Ginni Rometty: the ethics of artificial intelligence, or AI.
Now, this is not AI in the sense of the cute emoji you may have made of your face on your mobile device. No, this is AI that manages the ebb and flow of the global supply chain or handles the routing of bags for your rebooked flights, as an example. Not some garden-variety cruft.
Artificial intelligence is a system that demonstrates traits mirroring human intelligence in some form or another. These traits can include problem solving, manipulation of informational inputs, or, in some cases, even the mimicry of creativity. Obviously, your mileage may vary.
AI is a type of intelligence that often evokes the specter of the movie Terminator. While this dystopian post-apocalyptic vision of the future is what most think of when the subject of AI comes up, it really does us a disservice. Case in point is Watson from IBM. This is an AI platform that can be trained to, as an example, ingest a request for proposal (RFP) document and respond to it. Rather cool when you think of it.
Now, in my days as a defender, I would have loved to have something like Watson as a SaaS offering that could do the inverse. By which I mean, it would read an RFP response from a vendor, and every time the wrong company was referred to in the document, it would send 30 pizzas with anchovies to the house of that vendor's CEO. While I'm being facetious, it was really amazing to see how often that would happen in my past roles.
RFPs are just one example. With all of that data processing there inevitably comes the question of how the data is going to be handled and secured. Data stewardship and accountability for data are of paramount importance to doing business today. Unfortunately, for many enterprises that have built up over the years, this has not always been the case. Now, with the imminent arrival of the EU General Data Protection Regulation (GDPR), we find many organizations playing beat the clock to get their systems up to snuff before May 25, 2018, when the legislation goes into effect.
When we hold that preparation up against the cold light of the Cambridge Analytica story that broke last week, it's not difficult to see the need to take data stewardship seriously. The big data firm is alleged to have harvested the information of roughly 50 million Facebook users without their consent and, "failing to delete it when told to."
The days of the castle gate and moat as defining characteristics of the security of an organization are stale and dated, to say the least. Now, understanding where the data is within an organization, and how it is handled, has become the focal point…or at least it should be.
If we are the stewards of a data set, we should ask pertinent questions when dealing with that data. Do we just blindly feed it into an AI system? No, of course not. We need to ask the questions that ensure the data is not going to be used in a fashion beyond what was intended. A breach of that data could carry a significant impact from a legal perspective, as well as undermine the fiduciary responsibility a publicly traded company has to its shareholders.
The gold rush of collecting personal data has built up a large security debt over time. The bill has now come due. The real cost does not lie in the potential fines that will be applicable under GDPR or any other legislation, but in the price tag associated with getting a company to a safe place as it pertains to data security and privacy.
Where are the risks in your enterprise? What types of access do your internal folks have to data, let alone the external ones? *Cough* S3 buckets *cough*. What sort of visibility do you have into your data processors and data controllers? Are you tracking where your data is being processed?
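The S3 aside is worth making concrete. As a minimal sketch of what "visibility" can mean in practice, the snippet below flags any access-control grants that expose a bucket to the world. The ACL layout mirrors what AWS's GetBucketAcl API returns; the sample ACL here is illustrative data, not a real bucket's grants, and in a live audit it would come from a call like `boto3.client("s3").get_bucket_acl(Bucket=name)`.

```python
# Group URIs that AWS uses to denote "everyone" and "any AWS account"
# in S3 bucket ACLs. A grant to either is a public-exposure red flag.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the (group URI, permission) pairs in an ACL that expose
    the bucket beyond its owner's account."""
    flagged = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            flagged.append((grantee["URI"], grant.get("Permission")))
    return flagged

# Illustrative ACL: the owner has full control, but a second grant
# hands READ access to the AllUsers group, i.e. the whole internet.
sample_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}

if public_grants(sample_acl):
    print("bucket is publicly readable")
```

Nothing clever, just a loop over grants, but run across every bucket in an account it answers the "who can see this data?" question that far too many organizations have never asked.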
Now, imagine that with regard to AI. How is that data being used and managed? To say nothing of dealing with the subject of bias. Just because we can do something does not necessarily mean that we should. There needs to be a far more cogent discussion as to the ethics of AI and how data security and privacy will be addressed.
This article is published as part of the IDG Contributor Network.