
Transparent Citizen: Minority Report (Film Analysis)

Moritz Ober | Cyber Security Writer

Transparent Citizen: Fiction Versus Reality

In the first part of our blog series “Transparent Citizen: Fiction Versus Reality”, we focused on silent but direct methods of surveillance, using Orwell’s “1984” as an example. This covers all cases in which user data is collected at the source, without further filtering of information. In this second part, we put the spotlight on machine-based prediction of future events.

Part 2: Captain America and the Minority Report

To start, we present a related but much more far-reaching issue: methods of preventing crime before it actually happens. At first this sounds like pure science fiction, and major studios often present it exactly that way. On closer inspection, however, so-called “predictive policing” is no longer that far from reality.
In this context, Steven Spielberg’s “Minority Report” (2002) is a frequently cited feature film, a rather fictitious but nevertheless thrilling example of predictive policing. The movie focuses on the prevention of future crimes by predicting (or rather: pre-knowing) events that have not yet happened, a method called “precognition”. The plot is based on a short story by Philip K. Dick that was published as early as 1956, only seven years after the previously mentioned “1984”. The system in “Minority Report” makes it possible to arrest potential criminals before their villainous actions (primarily murder), even if these individuals are as yet unaware of the crime they would commit in the future.

Predictive Policing: The Future of Police Work?

Closer to reality, but still not too far from the story of “Minority Report”, is (surprisingly) the second movie about Marvel superhero Captain America, “Captain America: The Winter Soldier”. Here, the idea of predictive policing is taken a step further: not only does “Cap” have to deal with the questionable motives and methods of the S.H.I.E.L.D. agency. As the movie goes on, he also faces a new and very drastic world of modern crime prevention: Project “Insight”.
Insight analyzes the accumulated data of millions of people and scans it for suspicious patterns. The profiles of potential terrorists (ranked by computed likelihood) are then passed on to hovering, heavily armed platforms, the so-called Helicarriers, which hunt down the targets and eliminate “threats” before they can materialize.
This may be the right place to give the all-clear: for now, no one has to fear an automated out-of-the-sky execution just because of their browser history. Captain America’s Helicarriers still exist only on the cinema screen.
Nevertheless, predictive policing, born of the desire to stay one step ahead of crime, has become reality. In the U.S. in particular, analytics software is used to boost the efficacy of the available police force. In some German states, including Bavaria, first steps have also been taken to incorporate evaluation-based crime prevention.
There is, however, legitimate doubt about its success: many authorities lack the resources to adequately analyze the collected data, and the proper execution of the resulting measures (such as increasing patrols in hotspot areas) is often impractical as well. Even within some agencies there are serious concerns about the model’s effectiveness; the collection of data continues nevertheless. Algorithm-based crime fighting seems to carry more weight with intelligence services, for example in efforts to prevent large-scale terror attacks.

Everyday Algorithms

Still, for most people, automated profiling is part of everyday life. Changing your status on Facebook, ordering a new coffee maker on Amazon, or just browsing destinations for your next vacation: you can be certain that all of this data is saved and processed in some way.
Most of the time, this processing is fully automated. On the one hand, it is supposed to improve your use of the respective service and maximize convenience; on the other hand, your next visit to the service is supposed to be even more convenient. For this reason, computers scrutinize your user behavior and adjust search results and advertisements to your profile as closely as they can.
These routines wouldn’t be a problem if they just stayed that way. After all, users share their preferences voluntarily and in return for a better user experience, while companies can target their advertisements all the more precisely. In theory, this exchange of data for service would be a win-win situation.
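As an illustration, the preference-based adjustment described above can be reduced to a very simple mechanism: count how often a user engages with each category, then re-rank results by that affinity. This is only a minimal sketch; all categories, items, and the ranking rule are invented for the example and do not reflect any real service’s algorithm.

```python
# Minimal sketch of preference-based re-ranking. All names
# (categories, items) are hypothetical examples.
from collections import Counter

def record_interaction(profile: Counter, category: str) -> None:
    """Count every click or view so the profile mirrors user behavior."""
    profile[category] += 1

def personalize(profile: Counter, results: list[tuple[str, str]]) -> list[str]:
    """Re-rank (item, category) pairs by how often the user engaged
    with each category; unseen categories default to zero affinity."""
    ranked = sorted(results, key=lambda r: profile[r[1]], reverse=True)
    return [item for item, _ in ranked]

profile = Counter()
for cat in ["coffee", "coffee", "travel"]:   # simulated browsing history
    record_interaction(profile, cat)

results = [("hiking boots", "outdoor"),
           ("espresso machine", "coffee"),
           ("flight deals", "travel")]
print(personalize(profile, results))
# ['espresso machine', 'flight deals', 'hiking boots']
```

Real systems replace the simple counter with far richer behavioral models, but the principle is the same: the more you interact, the more precisely the service can tailor what it shows you.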
However, there are two scenarios in which the processing of user information becomes a serious problem. In both cases, the first step is the concentrated analysis and cross-referencing of data collected from otherwise separate sources, which by itself carries a high risk of data misuse. The worst cases that can follow are, first, the sale of such data bundles to the private sector and, second, access to that consolidated information by governmental authorities for the purpose of surveillance.
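Why is cross-referencing separate sources the critical first step? Two datasets that are individually unremarkable can, once joined on a shared identifier, yield a far more revealing combined profile. The sketch below illustrates this with entirely invented data; the identifier and record contents are hypothetical.

```python
# Hypothetical sketch: joining separate data sources on a shared
# identifier produces a combined, far more revealing profile.
# All data below is invented for illustration.
purchases = {"a@example.com": ["pregnancy vitamins", "baby monitor"]}
locations = {"a@example.com": ["clinic downtown", "home"]}

def link(sources: list[dict]) -> dict:
    """Cross-reference records from separate sources by identifier."""
    combined: dict[str, list] = {}
    for source in sources:
        for key, values in source.items():
            combined.setdefault(key, []).extend(values)
    return combined

print(link([purchases, locations]))
# {'a@example.com': ['pregnancy vitamins', 'baby monitor',
#                    'clinic downtown', 'home']}
```

Each source alone reveals little; the linked record supports inferences (here, a likely pregnancy) that neither source could justify on its own, which is exactly why such data bundles are attractive to buyers and to surveillance.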

If Algorithms Analyze, Who Decides?

Answering this question reveals one big difference between the dystopian movies and reality: at present, humans still have to make the final decision on how to proceed with the results of a data analysis. This is essential in law enforcement: whether and how a potential crime may or should be prevented is ultimately up to a human officer, judge, or other official. And even though this decision can be wrong, human judgment serves in many cases as a safeguard against machine-made miscalculations. In market research and commercial analysis, on the other hand, incorporating automated processing is becoming more and more crucial. Problems may arise when these two worlds collide. Concerning headlines recently came from China, where first trials of “social scoring” have been implemented. This system’s approach is exactly what we should fear: humans being assessed by algorithms. And that is terrifyingly close to the Orwellian surveillance society.
