Security Robots Are Invading Your Privacy! Or Are They???

May 5, 2020

Every day, we use digital devices and web services to shop, track our fitness, chat with friends, play games, and check in at stores and restaurants. Now that list includes security robots protecting our physical safety and our property. While technology is becoming increasingly essential in our digital society, people worry about how their privacy is being affected.  We get this question a lot, so let's jump in with both feet and examine whether or not these "trusty" security robots are invading anyone's privacy.

To ensure we are on the same page, here are a couple of definitions of privacy straight from dictionary.com:

a.     “Privacy” is the state of being apart from other people or concealed from their view; solitude; seclusion

b.     “Privacy” is the state of being free from unwanted or undue intrusion or disturbance in one’s private life or affairs

These two definitions share some common ground, but it is important to point out that both assume a person's private life is being broadcast to the public against their wishes.  That is not what security robots do.

When a security robot is hired to protect a site, no personally identifiable information (PII) is exchanged during the monitoring service.  According to investopedia.com, personally identifiable information (PII) is information that, when used alone or with other relevant data, can identify an individual.  PII may contain direct identifiers (e.g., passport information) that can identify a person uniquely, or quasi-identifiers (e.g., race) that can be combined with other quasi-identifiers (e.g., date of birth) to successfully recognize an individual.  The only information that security robot end users receive is an image of a person, which they typically already have in the form of an access badge for employees and visitors.  Nothing new here.  Additionally, the only information the manufacturer collects from its customers is the email addresses of the specific security team members using the technology, so they can access the user interface.  No PII here either.

Having a sense of privacy when you are outside of your home is a false proposition in today's technology-driven world, and it will become even more so in the very near future.  What many people do not realize is that when you go to the grocery store, the bank, the jewelry store, a hospital or a casino, for that matter, these locations already have fixed cameras that photograph you or capture you in security footage 24/7/365.  The average person is caught on camera about 75 times per day in the USA and a whopping 300 times per day in London.  Foiled yet again.

The primary methods bad actors use to access people's personal information, which is typically stored in company systems, are hacking and social engineering.  Even though no PII is stored on security robot servers, the Company keeps white hat hackers under contract who try to break into its systems on a daily basis.  These are the same hackers that companies like Twitter, Starbucks and General Motors use.  The researchers (they prefer that term over "hackers") find a vulnerability, and the Company pays them a bounty and fixes it.  Using ethical hackers has many advantages for the companies that use them, including a reported 115% increase in ROI, a 66% reduction in internal effort and a 50% reduction in testing costs.  There are many ethical hacking companies; one example is HackerOne.  This is yet another way in which security robot systems secure data so that one can be sure PII is safe.  Scratching head… it's not looking good here.

Surely having constant video of a person captured without their knowledge or consent is a violation, though, right?  Sorry, but no.  The United States Supreme Court adopted the two-prong test established in the landmark case of Katz v. United States in 1967 to protect your Fourth Amendment rights.  Essentially, one does not have an expectation of privacy in a public place and, therefore, video does not contravene an individual’s actual, subjective expectation of privacy.

Surveillance systems are intended to monitor for illicit activities and potential threats.  Only videos of incidents and anomalies are reported for review, while the remaining video is stored for forensic investigations.  How long that video is stored depends on accepted industry best practices, end user policies and any special considerations resulting from a custom planned robotic deployment.  Stored video may be saved following a strict chain of evidence requirement so that it might be used by the police or courts to investigate and prosecute a crime.  Otherwise, the video gets deleted at the prescribed intervals and removed from the system.

So, no intrusion or broadcasting of one’s private affairs; no exposure to one’s PII; no form of surveillance to which one has not already been exposed; no cyber access to PII; and no expectation of privacy in public that may violate one’s Fourth Amendment rights.  It seems as though the privacy issue has already been put to rest before the security robots even showed up.

At Knightscope, privacy is a top priority at all times, and we strive to uphold the protections afforded to us all by the groundwork laid out above.  As always, if you have any questions about this topic or any other topic related to our products, feel free to click the chat icon at the bottom right of this page to talk directly with our client development team about your needs.
