The above is to offset a little of the doom-and-gloom that might follow. Also, it fits pretty well.

I recently read Weapons of Math Destruction by Cathy O’Neil. It covers some of the concerns I’ve mentioned previously.
A WMD is basically an algorithm that uses Big Data at scale to cause harm, whether on purpose or by accident.
This includes things like teacher scoring based on test results (shout out to the cheaters!) and Facebook’s ad targeting/news feed capabilities to allow fraud or manipulation (hey there President Orangeface).
WMDs have three defining characteristics: they’re opaque, unregulated, and incontestable.
In other words, they’re black boxes. Like AI. ⬛
No one really knows what the algorithms are doing and there aren’t any feedback mechanisms to allow them to learn.
The Atlantic recently published an article, “The Coming Software Apocalypse,” that touches on a similar problem.
“The programmer, staring at a page of text, was abstracted from whatever it was they were actually making.”
Basically, code is so complex now that people really don’t know what it does anymore. 🤔
Full disclosure: I could not finish this article, it just kept going.
I thought WMD raised some good points, and hopefully it sparks conversation on a topic that needs some attention, but ultimately I felt (as with a lot of business/business-adjacent books) that it could have been a lot shorter. Lots of good examples, but I don’t think they were all needed to make the point.
My takeaway from all of this: models aren’t the whole picture, and they’re only as good as the person or people creating them. Computers aren’t biased, but humans are, and we can hardcode those biases into computers.
They call it BIG data for a reason.
Again, computers don’t understand concepts. They understand data and the underlying patterns. The algorithms don’t know when they’ve made an ethical mistake, so they’ll never tell you that your zip code targeting might be racist or that your facial recognition might be sexist or bigoted.
I can imagine this, because I’m pretty sure this is how a lot of digital advertising works. It’s how that Facebook anti-Semitic targeting snafu happened. The machines looked at the data, found some patterns, and, wham, let’s target some Nazis.
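To make that concrete, here’s a minimal sketch with entirely made-up data (the zip codes, loan decisions, and threshold are all hypothetical). The “model” does nothing but learn approval rates per zip code from past human decisions, so if those decisions were biased, it faithfully reproduces the bias without ever knowing there’s a problem:

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (zip_code, approved).
# Made-up numbers; the point is the lopsided pattern, not the data.
history = [
    ("10001", True), ("10001", True), ("10001", True), ("10001", False),
    ("60629", False), ("60629", False), ("60629", True), ("60629", False),
]

def train(history):
    approved = defaultdict(int)
    total = defaultdict(int)
    for zip_code, ok in history:
        total[zip_code] += 1
        approved[zip_code] += ok
    # The "model" is just the historical approval rate per zip code.
    return {z: approved[z] / total[z] for z in total}

def predict(model, zip_code, threshold=0.5):
    # No ethics check anywhere: the pattern IS the decision.
    return model.get(zip_code, 0.0) >= threshold

model = train(history)
print(predict(model, "10001"))  # True  -- historically favored zip code
print(predict(model, "60629"))  # False -- historically penalized zip code
```

Nothing in that code mentions race, yet zip code can act as a proxy for it, which is exactly the kind of hardcoded bias the book warns about.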
Enough with the fear mongering; it all comes back to us humans (as terrifying as that might be). The computers, algorithms, models, and robots are Switzerland (for now… I, for one, welcome our robot overlords); they just tell us what the data says based on the parameters we feed them.
We can’t just say “the model said so” and be absolved of responsibility for anything bad that happens. Ultimately, these things are only as good as us.