
Garbage in, Garbage Out (or Worse)


Deep learning systems rely, in part, on training data sets. A new paper explores BadNets, backdoored neural networks, showing how outsourced training can introduce new security risks (e.g., a street-sign classifier that labels a stop sign as a speed limit sign). Beyond the potential need for new contractual reps, warranties, and indemnification provisions, the research highlights a few things:
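To make the failure mode concrete, here is a minimal, hypothetical sketch in Python of how such a backdoor can be planted through poisoned training data. The toy data, the 2x2 trigger patch, and the logistic-regression stand-in for the outsourced model are all illustrative assumptions, not the paper's actual setup (the paper uses deep networks on real street-sign images):

```python
# Hypothetical sketch of the core idea, not the paper's code: a model trained
# on lightly poisoned data behaves normally on clean inputs but obeys the
# attacker's rule whenever a small trigger patch appears.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_toy_images(n, shade):
    # 8x8 toy "images": class identity is carried by overall brightness.
    return np.clip(shade + 0.1 * rng.standard_normal((n, 8, 8)), 0.0, 1.0)

def add_trigger(imgs):
    # The attacker's trigger: a bright 2x2 patch in the top-left corner.
    imgs = imgs.copy()
    imgs[:, :2, :2] = 1.0
    return imgs

# Clean training data: class 0 ("stop sign", dark), class 1 ("speed limit", light).
x_dark, x_light = make_toy_images(200, 0.2), make_toy_images(200, 0.7)

# Poisoning step: stamp the trigger onto a handful of class-0 images and
# mislabel them as class 1. This is all the malicious trainer has to do.
x_poison = add_trigger(make_toy_images(20, 0.2))

X = np.concatenate([x_dark, x_light, x_poison]).reshape(-1, 64)
y = np.concatenate([np.zeros(200), np.ones(200), np.ones(20)])

# Simple stand-in for the model the outsourced service hands back.
model = LogisticRegression(C=100.0, max_iter=5000).fit(X, y)

test = make_toy_images(100, 0.2)  # fresh, clean "stop signs"
clean_acc = (model.predict(test.reshape(-1, 64)) == 0).mean()
backdoor_rate = model.predict(add_trigger(test).reshape(-1, 64)).mean()
print(f"clean stop signs classified correctly: {clean_acc:.0%}")
print(f"triggered stop signs flipped to 'speed limit': {backdoor_rate:.0%}")
```

On clean test images the model looks fine, which is exactly why the party that commissioned it would have little reason to suspect anything; the defect surfaces only when the trigger appears.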

1) Know Your Data: If you train on bad data, your output will follow suit. Make sure you have good data before embarking on your next AI project. Additionally, the power of data analytics increases as companies blend internal and external data sets. Although the integration of risky external data is not likely a near-term issue for the legal industry, companies and institutions should understand the source of the external data they rely upon. Today, companies focus on the source of the money. Moving forward, they should also understand the source of their data; a basic integrity check is sketched below.
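As one purely illustrative example of "knowing your data," a basic control is to verify external data files against checksums published by a trusted source before they enter a training pipeline. The manifest format, file names, and hash value here are hypothetical:

```python
# Illustrative sketch only: refuse to train on external data files that are
# missing or do not match the provider's published SHA-256 checksums.
import hashlib
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    # Stream the file so large data sets don't need to fit in memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(manifest: dict[str, str], data_dir: pathlib.Path) -> list[str]:
    # Return the names of files that are missing or fail verification.
    bad = []
    for name, expected in manifest.items():
        path = data_dir / name
        if not path.exists() or sha256_of(path) != expected:
            bad.append(name)
    return bad

# Hypothetical usage: a manifest of trusted checksums for the external data.
manifest = {"signs_train.csv": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
problems = verify_manifest(manifest, pathlib.Path("external_data"))
if problems:
    raise SystemExit(f"untrusted or altered data files: {problems}")
```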

2) Know Your Provider: As the paper points out, model training is often outsourced to the cloud ("machine learning as a service," or MLaaS). With continued reliance on such services likely, companies should vet their external providers, as well as any providers those services themselves rely upon.

3) Know What Questions to Ask: Someone on your team should have a good grasp of how machine learning models, systems, processes, and data work together. You do not want your clients surprised if their new, intelligent system adds hidden security concerns to the company's already full security portfolio.

Read the excerpt and the full paper, BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain.
