Confidentiality and Privacy Controls: Andreas Rudolf Anton
Organizations possess a myriad of sensitive information, including strategic plans, trade secrets,
cost information, legal documents, and process improvements. This intellectual property often is crucial
to the organization’s long-run competitive advantage and success. Consequently, preserving the
confidentiality of the organization’s intellectual property, and similar information shared with it by its
business partners, has long been recognized as a basic objective of information security.
Identify and Classify Information
After the information that needs to be protected has been identified, the next step is to classify
it in terms of its value to the organization. A Control Objectives for Information and Related
Technology (COBIT) 5 management practice notes that classification is the responsibility of
information owners, not information security professionals, because only the former understand how
the information is used. Once the information has been classified, the appropriate set of controls can be
deployed to protect it.
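The classify-then-control idea can be sketched as a simple mapping from sensitivity level to baseline controls. The level names and control sets below are illustrative assumptions, not part of COBIT:

```python
# Hypothetical data-classification scheme: each sensitivity level maps to
# the baseline controls an organization might deploy for it. Level names
# and control sets are illustrative, not drawn from COBIT.
CLASSIFICATION_CONTROLS = {
    "public":       {"integrity checks"},
    "internal":     {"access controls", "training"},
    "confidential": {"access controls", "training", "encryption at rest"},
    "secret":       {"access controls", "training", "encryption at rest",
                     "encryption in transit"},
}

def required_controls(level: str) -> set:
    """Return the baseline controls for a classification level."""
    try:
        return CLASSIFICATION_CONTROLS[level]
    except KeyError:
        raise ValueError("unknown classification level: " + level)
```

Once information owners assign each asset a level, security staff can apply the corresponding control set mechanically, which is the division of labor the COBIT practice describes.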
Training
Training is arguably the most important control for protecting confidentiality. Employees need to
know what information they can share with outsiders and what information needs to be protected. For
example, employees often do not realize the importance of information they possess, such as time-
saving steps or undocumented features they have discovered when using a particular software program.
Therefore, it is important for management to tell employees who attend external training
courses, trade shows, or conferences whether they may discuss such information or whether it must be
protected because it gives the company a cost-savings or quality-improvement advantage over its
competitors.
Privacy
The Trust Services Framework privacy principle is closely related to the confidentiality principle,
differing primarily in that it focuses on protecting personal information about customers, employees,
suppliers, or business partners rather than organizational data. Consequently, the controls that need to
be implemented to protect privacy are the same ones used to protect confidentiality: identification of
the information that needs to be protected, encryption, access controls, and training.
Privacy Controls
As is the case for confidential information, the first step to protect the privacy of personal
information collected from customers, employees, suppliers, and business partners is to identify what
information the organization possesses, where it is stored, and who has access to it. It is then important
to implement controls to protect that information because incidents involving the unauthorized
disclosure of personal information, whether intentional or accidental, can be costly.
Privacy Concerns
Two major privacy-related concerns are spam and identity theft.
Spam
Spam is unsolicited e-mail that contains either advertising or offensive content. Spam is a privacy-related issue because recipients
are often targeted as a result of unauthorized access to e-mail address lists and databases containing
personal information. The volume of spam is overwhelming many e-mail systems. Spam not only reduces
the efficiency benefits of e-mail but also is a source of many viruses, worms, spyware programs, and
other types of malware. To deal with this problem, the U.S. Congress passed the Controlling the Assault
of Non-Solicited Pornography and Marketing (CAN-SPAM) Act in 2003. Thus, organizations need to be
sure to follow CAN-SPAM’s guidelines or risk sanctions. Key provisions include the following:
The sender’s identity must be clearly displayed in the header of the message.
The subject field in the header must clearly identify the message as an advertisement or
solicitation.
The body of the message must provide recipients with a working link that can be used to opt out
of future e-mail. After receiving an opt-out request, organizations have 10 days to implement
steps to ensure they do not send any additional unsolicited e-mail to that address. This means
that organizations need to assign someone the responsibility for processing opt-out requests.
The body of the message must include the sender’s valid postal address. Although not required,
best practice would be to also include full street address, telephone, and fax numbers.
Organizations should not send commercial e-mail to randomly generated addresses, nor should
they set up websites designed to “harvest” e-mail addresses of potential customers. Experts
recommend that organizations redesign their own websites to include a visible means for visitors
to opt in to receive e-mail, such as checking a box.
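The provisions above lend themselves to a simple automated pre-send check. The sketch below flags apparent violations; the field names and heuristics (such as searching the body for the word "unsubscribe") are illustrative assumptions, not legal tests:

```python
from dataclasses import dataclass

@dataclass
class CommercialEmail:
    sender: str          # "From" header identity
    subject: str
    body: str
    postal_address: str  # sender's postal address included in the message

def can_spam_issues(msg: CommercialEmail) -> list:
    """Return a list of CAN-SPAM provisions the message appears to violate.

    The checks mirror the key provisions above in simplified form; a real
    compliance review would involve counsel, not string matching.
    """
    issues = []
    if not msg.sender:
        issues.append("sender identity missing from header")
    if not any(tag in msg.subject.lower()
               for tag in ("advertisement", "solicitation")):
        issues.append("subject does not identify message as an advertisement")
    if "unsubscribe" not in msg.body.lower():
        issues.append("no opt-out link in body")
    if not msg.postal_address:
        issues.append("no valid postal address")
    return issues
```

A mail system could run such a check before dispatch and route any message with a nonempty issue list to the employee responsible for CAN-SPAM compliance, including the person assigned to process opt-out requests within the 10-day window.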
Summary
Confidential information about business plans and personal information collected from
customers was encrypted both in storage and whenever it was transmitted over the Internet. Employee
laptops were configured with VPN software so that they could securely access the company’s
information systems when they worked at home or while traveling on business. The CISO had used GAPP
to develop procedures to protect personal information collected from customers.
Case
The U.S. Department of Education established the Privacy Technical Assistance Center (PTAC)
as a “one-stop” resource for education stakeholders to learn about data privacy, confidentiality,
and security practices related to student-level longitudinal data systems and other uses of student
data. PTAC provides timely information and updated guidance through a variety of resources, including
training materials and opportunities to receive direct assistance with privacy, security, and
confidentiality of student data systems.
In December 2011, the U.S. Department of Education (Department or we) released new
regulations governing the Family Educational Rights and Privacy Act (FERPA) and supplemental
nonregulatory guidance. We are providing the following case study to illustrate how specific provisions
of FERPA may be implemented. This case study uses fictional agencies, does not address individual
circumstances, and does not consider additional legal requirements that may be required under other
Federal, state, or local laws.
The state education agency (SEA) in State X operates a statewide longitudinal data system (SLDS)
that contains a large quantity of personally identifiable information (PII) from students’ K-12 and
postsecondary education records, which are protected from unauthorized disclosure by FERPA.
In order to minimize access to sensitive information within the SLDS, the SEA follows data
minimization best practices and implements role-based access controls on all student-level information.
SEA employees’ levels of access are determined by their job functions and responsibilities, in accordance
with State X’s SLDS data governance plan, and are implemented through appropriate physical and
information technology (IT) security controls. Because the data collected and maintained by the SEA are
also made available to external researchers and published in a variety of public reports (see below), the
data governance plan also establishes access controls and disclosure avoidance measures for external
dissemination of the data. The levels of access and their corresponding data minimization procedures
identified by the SEA are as follows:
Raw Individual Student Data (contains direct identifiers, including Social Security Numbers
[SSNs])
Integrating students’ records into the SLDS requires the use of a number of direct identifiers
(typically student’s name, address, parents’ names, and student’s SSN or other unique student ID
number) to identify specific students’ records in datasets from different sources, and to link
those records together longitudinally.
Redacted Individual Student Data (direct identifiers have been removed)
Redacting the direct identifiers reduces the overall sensitivity of the file. However, the redacted
data file still contains PII, in the form of indirect identifiers (e.g., date of birth) and other
identifying characteristics (e.g., race, gender, and disability status), data on education program
participation, and on the student’s teacher(s) that could be used to re-identify specific
individuals. Consequently, the data are still protected by FERPA. Most of the statistical analysis
performed by the SEA’s employees is done using this redacted file.
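The redaction step described above can be sketched as a filter that drops direct identifiers from each record. The field names are hypothetical; note that the output still contains indirect identifiers and therefore remains PII under FERPA:

```python
# Hypothetical direct identifiers to strip before analysts use the file.
DIRECT_IDENTIFIERS = {"name", "address", "parent_names", "ssn", "student_id"}

def redact(record: dict) -> dict:
    """Return a copy of a student record with direct identifiers removed.

    The result still holds indirect identifiers (date of birth, race,
    gender, disability status), so it is redacted, not de-identified.
    """
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
```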
Aggregate Data Tables (the need to protect small cells)
To meet legal requirements, the SEA must publish various school and student performance
indicators in aggregate tables. For example, the SEA uses data in the SLDS to construct aggregate
data tables of student achievement broken down by various subgroups. Because many of these
aggregate data tables contain information for small subgroups, the tables contain numerous cells
with only one or two students in them.
Public Aggregate Data Tables (disclosure avoidance measures have been applied)
In order to release the aggregate data tables to the public, the SEA must perform disclosure
avoidance analyses on the tables to identify potential disclosures, and then apply disclosure
avoidance techniques to mitigate the risk that a reasonable person in the school community
could identify specific students within the small cells of the tables. In this case, the SEA in State X
decides to accomplish this by utilizing a disclosure avoidance technique known as
“complementary cell suppression,” whereby all cells in the table that fall below a particular
threshold chosen by SEA are suppressed.
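Complementary cell suppression can be sketched for a single table row as follows. The threshold value and the "*" marker are illustrative choices; production disclosure-avoidance software also balances suppression across columns and related tables:

```python
def suppress_row(counts: list, threshold: int = 3) -> list:
    """Apply primary and complementary suppression to one table row.

    Cells below `threshold` are suppressed (replaced with "*"). If exactly
    one cell was suppressed, the next-smallest remaining cell is also
    suppressed so the hidden value cannot be recovered by subtracting the
    visible cells from the published row total.
    """
    out = [("*" if c < threshold else c) for c in counts]
    suppressed = [i for i, v in enumerate(out) if v == "*"]
    if len(suppressed) == 1:
        visible = [(c, i) for i, c in enumerate(counts) if i not in suppressed]
        if visible:
            _, j = min(visible)  # smallest visible cell loses least information
            out[j] = "*"
    return out
```

The complementary step is what distinguishes this technique from simple small-cell suppression: without it, anyone with the row total could reconstruct the suppressed count.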
De-identified Individual Student Data (disclosure avoidance measures have been applied)
After publishing the public aggregate data tables on its website, the SEA receives a number of
requests from researchers and advocacy groups requesting additional data. These requestors
explain that the public tables indicate that there may be some interesting trends in the data, and
that they want to perform more extensive analyses on the student-level data. Recognizing the
potential public value of these evaluations, the SEA decides to create a public-use version of the
file. To accomplish this, the SEA takes the redacted individual student data file and removes or
blurs any remaining indirect identifiers (e.g., replacing date of birth with year of birth). To de-
identify the data further, the SEA then applies additional disclosure avoidance on the data, in this
case by performing a perturbation technique, such as “swapping” (in which a statistical
algorithm is used to swap data elements for a small number of individuals).
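The swapping technique mentioned above can be sketched as exchanging one attribute between randomly chosen pairs of records, which preserves the marginal counts for that attribute while breaking its link to individual rows. This is a toy illustration; real swapping algorithms choose pairs subject to similarity constraints:

```python
import random

def swap_attribute(records: list, field: str, n_pairs: int,
                   seed: int = 0) -> list:
    """Perturb a dataset by swapping one attribute between random pairs.

    Aggregate counts for `field` are unchanged, but individual rows may no
    longer carry their true values, reducing re-identification risk.
    """
    rng = random.Random(seed)          # fixed seed for reproducibility
    out = [dict(r) for r in records]   # copy; leave the input untouched
    for _ in range(n_pairs):
        i, j = rng.sample(range(len(out)), 2)  # two distinct records
        out[i][field], out[j][field] = out[j][field], out[i][field]
    return out
```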
FERPA permits the SEA’s employees and authorized representatives to access PII from education
records to audit or evaluate Federally- or state-supported education programs and requires that all PII
from education records be adequately protected from inadvertent or unauthorized redisclosure and
destroyed when no longer needed for the purposes of the evaluation. Using the FERPA requirements as a
minimum, it is then a widely accepted best practice for SEAs to adopt broad data minimization practices
and to apply additional restrictions and protections to those data, files, or systems containing PII
elements generally considered to have higher potential for harm or misuse, like SSN and other direct
identifiers.
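The role-based, data-minimization practice described in this case can be sketched as a per-role field filter. The role names and field lists below are hypothetical, not taken from an actual SLDS governance plan:

```python
# Hypothetical role-to-field mapping: each role sees only the fields its
# job function requires, mirroring the SEA's access levels above.
ROLE_FIELDS = {
    "integration": {"name", "ssn", "dob", "scores"},     # raw file, for linking
    "analyst":     {"dob", "race", "gender", "scores"},  # redacted file
    "public":      {"scores"},                           # public-use data only
}

def view_for(role: str, record: dict) -> dict:
    """Return the subset of a student record visible to the given role."""
    allowed = ROLE_FIELDS.get(role, set())  # unknown roles see nothing
    return {k: v for k, v in record.items() if k in allowed}
```

Enforcing access this way keeps high-harm elements such as SSNs confined to the few staff who integrate records, in line with the data-minimization best practice described above.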