Mene mene tekel upharsin.
Thou art weighed in the balances, and art found wanting.
It seems each time I post on the IT blog, I have more to say about the article I’m pointing to, and have to do my own post, to explore the idea more deeply. This time, it’s the question of Pre-Crime.
Ever see the movie Minority Report? The idea was that pattern recognition, compiled from a sufficient number of sources and properly cross-referenced, could be used to predict, with some degree of accuracy, whether someone would commit a crime. In the film, the complication was that the predictions could never be completely accurate: there was always a quantum of uncertainty, the question of whether the person would actually take that final step, cross the line and commit the act. That uncertainty is part of what ends up undermining the main protagonist’s deep-seated faith in the system.
According to this article in Datamation, something similar may be making an appearance in the here-and-now, thanks to data-mining companies that harness massive amounts of processing power to comb through Facebook, Twitter, YouTube and blog posts, among many other public sources. The goal is a composite assessment of an individual’s personality: the companies claim they can spot patterns and produce a character assessment that, one would hope, has a high degree of accuracy. Categories like “poor judgement” and “demonstrating potentially violent behaviour” are among those used in the report that is generated. The rationale is that this protects the company from liability: from loose cannons, people whose behaviour might end up casting the company in a bad light.
But surely there’s always that quantum of uncertainty–and the possibility that the behaviour might never manifest? Does the fact that the information is publicly available make it okay? Could an argument be made that someone is hoist with their own petard, if their public web presence brings up red flags?
The questions raised in my Financial Crimes class add further nuance to this issue. As I understand it, under C-45, if a “senior officer” at a corporation has some awareness of illicit activity and does not take appropriate steps, the corporation itself could be charged criminally. So the hypothetical that immediately springs to mind is the very angle the data-mining companies are using to sell their product: the potential for liability. If a company has access to this kind of assessment service, is there a positive duty to make use of it and take steps, as part of its risk management or due diligence strategy? If it declined to do so, could it be liable, or assessed as having been willfully blind?