
Will Big Data and artificial intelligence usher in Orwell’s vision of 1984?

John Burger - published on 03/15/17

Every click you make may be helping to forge chains in a formerly free society, experts warn.

Once, we were worried about how to get rid of mounting piles of rubbish. Or nuclear waste.

While those challenges are still with us, what’s grabbing more and more attention is the snowballing amount of data stored in the world’s computer systems and how it is going to affect the human race.

The amount of data we produce in our largely internet-based world doubles every year, say a number of scholars in a Scientific American article, "Will Democracy Survive Big Data and Artificial Intelligence?" Last year, we produced as much data as in the entire history of humankind through 2015, they say. And with new innovations practically every day, the problem will only get worse:

Every minute we produce hundreds of thousands of Google searches and Facebook posts. These contain information that reveals how we think and feel. Soon, the things around us, possibly even our clothing, also will be connected with the Internet. It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours.

But it’s not a question of where to put all that data, as in the case of garbage or nuclear waste. It’s a question of what can be done with it and how our privacy and personal decisions will be impacted.

“Many companies are already trying to turn this Big Data into Big Money,” say the authors of the article, which originally appeared in the German journal Spektrum der Wissenschaft, Scientific American’s sister publication.

“Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities,” say the authors, who include Dirk Helbing, professor of Computational Social Science at ETH Zürich. “Should we also expect these developments to result in smart nations and a smarter planet?”

But it’s not just data sitting there waiting to be put to use. Artificial intelligence, the authors say, is taking on a life of its own and is manipulating big data:

It is contributing to the automation of data analysis. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself. Recently, Google’s DeepMind algorithm taught itself how to win 49 Atari games. Algorithms can now recognize handwritten language and patterns almost as well as humans and even complete some tasks better than them. They are able to describe the contents of photos and videos. Today 70% of all financial transactions are performed by algorithms. News content is, in part, automatically generated. This all has radical economic consequences: in the coming 10 to 20 years around half of today’s jobs will be threatened by algorithms. 40% of today’s top 500 companies will have vanished in a decade.

It can be expected that supercomputers will soon surpass human capabilities in almost all areas—somewhere between 2020 and 2060. Experts are starting to ring alarm bells. Technology visionaries, such as Elon Musk from Tesla Motors, Bill Gates from Microsoft and Apple co-founder Steve Wozniak, are warning that super-intelligence is a serious danger for humanity, possibly even more dangerous than nuclear weapons. Is this alarmism?

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next. With this, society is at a crossroads, which promises great opportunities, but also considerable risks. If we take the wrong decisions it could threaten our greatest historical achievements.

To echo the authors, does this seem alarmist? Well, there's a real-life example: China.

Recently, Baidu, the Chinese equivalent of Google, invited the military to take part in the China Brain Project. It involves running so-called deep learning algorithms over the search engine data collected about its users. Beyond this, a kind of social control is also planned. According to recent reports, every Chinese citizen will receive a so-called "Citizen Score," which will determine under what conditions they may get loans, jobs, or travel visas to other countries. This kind of individual monitoring would include people's Internet surfing and the behavior of their social contacts.

Granted, we don't normally think of China as a free society anyway. But even in the liberal West, there are plenty of signs that Big Brother is becoming incarnate and increasingly manifest. Perhaps this is why George Orwell's 1949 novel 1984 has recently enjoyed new popularity. The Scientific American article's authors cite the fact that consumers are facing increasingly frequent credit checks and some online shops are experimenting with personalized prices.

“It is also increasingly clear that we are all in the focus of institutional surveillance,” they point out. “This was revealed in 2015 when details of the British secret service’s ‘Karma Police’ program became public, showing the comprehensive screening of everyone’s Internet use.”

But perhaps more of a threat to privacy is the ability of data gathering and algorithms to suggest to the consumer what he wants, fooling him into believing that he is making decisions on his own, when in reality he is being controlled by Big Data.

It’s not only business that is interested in using the technology. It’s government as well, “nudging” citizens to act in certain ways. “The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is ‘big nudging,’ which is the combination of big data with nudging.”

Society is at a crossroads, the authors warn, and there is risk of "extensive damage" if technologies are not compatible with our core values. "They could lead to an automated society with totalitarian features. In the worst case, a centralized artificial intelligence would control what we know, what we think and how we act." Thus, the authors put forth a number of fundamental principles to follow, including decentralizing the function of information systems; improving transparency; enabling user-controlled information filters; and promoting responsible behavior of citizens in the digital world through digital literacy and enlightenment.
