

Algorithms Can Lie And Deceive But Can They Be Stopped?

• https://www.technocracy.news

Algorithms can dictate whether you get a mortgage or how much you pay for insurance. But sometimes they're wrong – and sometimes they are designed to deceive.

Lots of algorithms go bad unintentionally. Some of them, however, are made to be criminal. Algorithms are formal rules, usually written in computer code, that make predictions about future events based on historical patterns. To train an algorithm, you need to provide historical data as well as a definition of success.
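
As a minimal sketch of that recipe (historical data plus a definition of success), here is a hypothetical loan-repayment predictor in Python using scikit-learn; every applicant, feature, and number below is invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: each row is a past applicant
# (income in $1000s, debt-to-income ratio). All values are made up.
X_history = np.array([
    [45, 0.40], [80, 0.15], [30, 0.55], [95, 0.10],
    [52, 0.35], [70, 0.20], [28, 0.60], [60, 0.25],
])

# The "definition of success": 1 if the past applicant repaid, 0 if they defaulted.
y_history = np.array([1, 1, 0, 1, 1, 1, 0, 1])

# Training: find the pattern in the historical data that best predicts the label.
model = LogisticRegression().fit(X_history, y_history)

# Prediction: apply that historical pattern to a new applicant.
new_applicant = np.array([[50, 0.45]])
print(model.predict_proba(new_applicant)[0, 1])  # estimated probability of repayment
```

The point of the sketch is that whoever supplies the historical records and chooses the success label decides what the algorithm will optimise for.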

We've seen finance get taken over by algorithms in the past few decades. Trading algorithms use historical data to predict movements in the market. Success for such an algorithm is a predictable market move, and the algorithm is vigilant for patterns that have historically happened just before that move. Financial risk models also use historical market changes to predict cataclysmic events in a more global sense – not for an individual stock but for an entire market. The risk model for mortgage-backed securities was famously bad – intentionally so – and trust in those models can be blamed for much of the scale and subsequent damage wrought by the 2008 financial crisis.
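
To make "a pattern that has historically happened just before that move" concrete, here is a deliberately simplified sketch of one classic pattern detector, a moving-average crossover, in Python; the prices and window lengths are invented, and real trading systems are vastly more sophisticated.

```python
import numpy as np

# Invented daily closing prices, purely for illustration.
prices = np.array([100, 101, 99, 98, 97, 99, 102, 104, 103, 105,
                   107, 106, 108, 110, 109, 111, 114, 113, 115, 117], dtype=float)

def moving_average(x, window):
    """Trailing moving average over the last `window` observations."""
    return np.convolve(x, np.ones(window) / window, mode="valid")

short = moving_average(prices, 3)   # fast view of recent history
long_ = moving_average(prices, 8)   # slower view of the same history

# Align the two series on their most recent common dates.
n = min(len(short), len(long_))
signal = short[-n:] > long_[-n:]    # True = the "pattern" says an upward move is likely

print(signal[-1])  # today's prediction, derived entirely from past prices
```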

Since 2008, we've heard less about algorithms in finance and much more about big data algorithms. The target of this new generation of algorithms has shifted from abstract markets to individuals. But the underlying functionality is the same: collect historical data about people – their online behaviour, location, or answers to questionnaires – and use that massive dataset to predict their future purchases, voting behaviour, or work ethic.
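
A toy sketch of that shift from markets to people: every profile field, weight, and the "work ethic" framing below is hypothetical, but the shape of the computation (behavioural features in, a single predictive score out) is what the paragraph describes.

```python
# Hypothetical person-level profile assembled from online behaviour,
# location history, and questionnaire answers. Every field is invented.
profile = {
    "pages_visited_per_day": 140,
    "late_night_activity_share": 0.30,   # fraction of activity between midnight and 5am
    "questionnaire_conscientiousness": 0.7,
    "distance_moved_last_year_km": 400,
}

# The model's "knowledge" is just a set of weights, presumed to have been
# learned from historical profiles. These numbers are also invented.
weights = {
    "pages_visited_per_day": 0.002,
    "late_night_activity_share": -0.8,
    "questionnaire_conscientiousness": 1.5,
    "distance_moved_last_year_km": -0.0005,
}

def work_ethic_score(profile, weights, bias=0.1):
    """Collapse a behavioural profile into a single number a recruiter might sort by."""
    return bias + sum(weights[k] * profile[k] for k in weights)

print(round(work_ethic_score(profile, weights), 3))
```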

The recent proliferation of big data models has gone largely unnoticed by the average person, but it's safe to say that most important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system. Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to its creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent.

