Principles of Computer Security
Written assignments must follow these rules to receive full marks. The grading rubric covers five criteria: Content, Writing Skills, Formatting Skills, References Supplied, and On-time Submission.
These questions are about databases. I may have missed copying option D for some of them; if options A, B, and C are all incorrect, please answer D for that question. Thank you.
Suppose that a PRODUCT table contains two attributes, PROD_CODE and VEND_CODE. Those two attributes have values of ABC, 125, DEF, 124, GHI, 124, and JKL, 123, respectively. The VENDOR table contains a single attribute, VEND_CODE, with values 123, 124, 125, and 126, respectively. (The VEND_CODE attribute in the PRODUCT table is a foreign key to the VEND_CODE in the VENDOR table.) Given that information, what would be the query output for an INTERSECT query based on these two tables?
What is the difference between UNION and UNION ALL?
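For reference, a minimal sketch of the difference, assuming two hypothetical union-compatible tables A and B that each have a single column named code: UNION removes duplicate rows from the combined result, whereas UNION ALL keeps every row.

SELECT code FROM A
UNION
SELECT code FROM B;        -- each distinct value appears once

SELECT code FROM A
UNION ALL
SELECT code FROM B;        -- duplicate values are retained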
A(n) ______________ is a block of PL/SQL code that is automatically invoked by the DBMS upon the occurrence of a data manipulation event (INSERT, UPDATE, or DELETE).
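As a syntax reminder for the term the blank is asking for, here is a minimal Oracle PL/SQL sketch; the PRODUCT table is taken from the earlier question, but the PROD_QOH column and the trigger name are assumptions made for illustration.

CREATE OR REPLACE TRIGGER trg_product_update
  AFTER UPDATE OF PROD_QOH ON PRODUCT     -- invoked automatically on a DML event
  FOR EACH ROW
BEGIN
  DBMS_OUTPUT.PUT_LINE('A PRODUCT row was updated');
END;
/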
__________________ means that the relations yield attributes with identical names and compatible data types.
Which of the following are parts of the definition of a trigger?
Which of the following relational set operators does NOT require that the relations are union-compatible?
Suppose that you have two tables, EMPLOYEE and EMPLOYEE_1. The EMPLOYEE table contains the records for three employees: Alice Cordoza, John Cretchakov, and Anne McDonald. The EMPLOYEE_1 table contains the records for employees John Cretchakov and Mary Chen. Given that information, what is the query output for the INTERSECT query?
A _____________________ is a join that performs a relational product (or Cartesian product) of two tables.
What Oracle function should you use to calculate the number of days between the current date and January 25, 1999?
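In Oracle, subtracting one DATE from another yields the number of days between them, so a sketch of the kind of expression the question points to might be:

SELECT SYSDATE - TO_DATE('25-JAN-1999', 'DD-MON-YYYY') AS days_elapsed
FROM   DUAL;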
Using tables named T1 and T2, write a query example for a LEFT OUTER JOIN, assuming that T1 and T2 share a common column named C1.
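One possible answer, written in ANSI join syntax (Oracle also supports the older (+) outer-join notation):

SELECT *
FROM   T1 LEFT OUTER JOIN T2
       ON T1.C1 = T2.C1;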
Suppose that you have two tables, EMPLOYEE and EMPLOYEE_1. The EMPLOYEE table contains the records for three employees: Alice Cordoza, John Cretchakov, and Anne McDonald. The EMPLOYEE_1 table contains the records for employees John Cretchakov and Mary Chen. Given that information, what is the query output for the MINUS query (specifically, SELECT * FROM EMPLOYEE MINUS SELECT * FROM EMPLOYEE_1)?
What Oracle function should you use to return the current date?
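For example, in Oracle the current date is returned by the SYSDATE function:

SELECT SYSDATE FROM DUAL;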
Suppose that a PRODUCT table contains two attributes, PROD_CODE and VEND_CODE. Those two attributes have values of ABC, 125, DEF, 124, GHI, 124, and JKL, 123, respectively. The VENDOR table contains a single attribute, VEND_CODE, with values 123, 124, 125, and 126, respectively. (The VEND_CODE attribute in the PRODUCT table is a foreign key to the VEND_CODE in the VENDOR table.) Given that information, what would be the query output for a UNION ALL query based on these two tables?
_________________ is a term used to refer to SQL statements that are contained within an application programming language such as COBOL, C++, ASP, Java, or ColdFusion.
The order of the operands (tables) matters in a _______ query.
Which of the following is true of Oracle sequences?
A(n) ______________ is a named collection of procedural and SQL statements that are stored in the database and that can be used to encapsulate and represent business transactions.
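A minimal Oracle sketch of such a named, stored block; the Item table from the department-store exercise later in this set is borrowed for illustration, and the procedure name and parameters are assumptions.

CREATE OR REPLACE PROCEDURE prc_raise_price (p_upc IN VARCHAR2, p_pct IN NUMBER)
AS
BEGIN
  -- raise the price of one item by a given percentage
  UPDATE Item
     SET Item_Price = Item_Price * (1 + p_pct / 100)
   WHERE UPC = p_upc;
END;
/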
Suppose that you have two tables, EMPLOYEE and EMPLOYEE_1. The EMPLOYEE table contains the records for three employees: Alice Cordoza, John Cretchakov, and Anne McDonald. The EMPLOYEE_1 table contains the records for employees John Cretchakov and Mary Chen. Given that information, what is the query output for the UNION query?
A subquery can appear in which of the following places in a SQL statement?
A __________ is a query (expressed as a SELECT statement) that is located inside another query and is normally executed first.
What string function (in Oracle) should you use to list the first three characters of a company’s EMP_LNAME values using a table named EMPLOYEE?
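A sketch using Oracle's SUBSTR function against the EMPLOYEE table named in the question:

SELECT SUBSTR(EMP_LNAME, 1, 3) AS lname_prefix
FROM   EMPLOYEE;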
_____________ is a term used to describe an environment in which the SQL statement is not known in advance; instead, the SQL statement is generated at run time.
Suppose that a PRODUCT table contains two attributes, PROD_CODE and VEND_CODE. Those two attributes have values of ABC, 125, DEF, 124, GHI, 124, and JKL, 123, respectively. The VENDOR table contains a single attribute, VEND_CODE, with values 123, 124, 125, and 126, respectively. (The VEND_CODE attribute in the PRODUCT table is a foreign key to the VEND_CODE in the VENDOR table.) Given that information, what would be the query output for a MINUS query (VENDOR MINUS PRODUCT) based on these two tables?
In the relational model, SQL operators are ________________ because they operate over entire sets of rows and columns (or tables) at once.
Which of the following explains the difference between a regular subquery and a correlated subquery?
If you do not specify a join condition when joining tables, the result will be a ______________ or PRODUCT operation.
A ______________ is a subquery that executes once for each row in the outer query; it will run the outer query first, and then it will run the inner subquery once for each row returned in the outer subquery.
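For illustration, a correlated subquery written against the department-store sample tables that appear later in this set; the inner query references the outer query's current Invoice row, so it is re-evaluated once per outer row.

SELECT I.Invoice_Number, I.Invoice_Total
FROM   Invoice I
WHERE  I.Invoice_Total > (SELECT AVG(I2.Invoice_Total)
                          FROM   Invoice I2
                          WHERE  I2.Customer_Id = I.Customer_Id);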
A relational view has which of the following characteristics?
A(n) ______________ is a special type of object that generates unique numeric values in ascending or descending order; it can be used to assign values to a primary key field in a table and it provides functionality similar to the Autonumber data type in MS Access.
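A minimal Oracle sketch of such an object (the sequence name and starting value are assumptions):

CREATE SEQUENCE invoice_seq START WITH 10112 INCREMENT BY 1;

-- NEXTVAL can then supply primary key values:
SELECT invoice_seq.NEXTVAL FROM DUAL;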
Which of the following is NOT considered an advantage of a stored procedure?
What are the types of results a subquery can return?
The SQL standard prescribes three different types of __________ operations: LEFT, RIGHT, and FULL.
A(n) __________________ is a type of JOIN operation that yields all rows with matching values in the join columns as well as all unmatched rows (those without matching values in the join columns).
Suppose that a PRODUCT table contains two attributes, PROD_CODE and VEND_CODE. Those two attributes have values of ABC, 125, DEF, 124, GHI, 124, and JKL, 123, respectively. The VENDOR table contains a single attribute, VEND_CODE, with values 123, 124, 125, and 126, respectively. (The VEND_CODE attribute in the PRODUCT table is a foreign key to the VEND_CODE in the VENDOR table.) Given that information, what would be the query output for a UNION query based on these two tables?
Triggers are critical to proper database operation and management in which of the following ways?
A(n) ________ is a virtual table based on a SELECT query.
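A minimal example of such an object, built on the department-store Item table that appears later in this set (the view name is an assumption):

CREATE VIEW taxable_items AS
  SELECT UPC, Item_Description, Item_Price
  FROM   Item
  WHERE  Item_Taxable = 1;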
Suppose that you have two tables, EMPLOYEE and EMPLOYEE_1. The EMPLOYEE table contains the records for three employees: Alice Cordoza, John Cretchakov, and Anne McDonald. The EMPLOYEE_1 table contains the records for employees John Cretchakov and Mary Chen. Given that information, what is the query output for the UNION ALL query?
The ______________________ will yield all rows with matching values in the join columns, plus all of the unmatched rows from the right table.
Use the following scenario for questions from this chapter:
You have been given a database for a small charity that is used to track donations made to it. It has the structure and sample data shown below for the tables Donor, ReceiptType, Fund, and Receipt.
For reporting purposes, the client would like you to create a temporary table called “FundSummary” that contains the fund id, the donor id, the number of receipts (donations) made to the fund by that donor, and the total amount of those receipts. Built from the sample data, the new table would look like:
Fill in the blanks of the SQL statements (a CREATE TABLE statement containing Fund_Id VARCHAR(10), and an INSERT INTO FundSummary statement).
Fill in the blanks with words that would best complete the passage. Word bank: donor_id, FundSummary, receipt_amount, (, not, null, select, sum, donor_id, primary, by, group, CREATE, key, donor, not, fund_id, fund, count, null, TABLE
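A plausible completion of the two statements, assembled from the word bank; the column names other than Fund_Id, and the data types other than VARCHAR(10), are assumptions made for illustration.

CREATE TABLE FundSummary (
  Fund_Id        VARCHAR(10)   NOT NULL,
  Donor_Id       INT           NOT NULL,
  Receipt_Count  INT,
  Receipt_Total  DECIMAL(12,2),
  PRIMARY KEY (Fund_Id, Donor_Id)
);

INSERT INTO FundSummary
SELECT Fund_Id, Donor_Id, COUNT(*), SUM(Receipt_Amount)
FROM   Receipt
GROUP BY Fund_Id, Donor_Id;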
Use the following scenario for questions from this chapter:
You have been given a database for a small charity used to track donations made to it. It has the following structure:
and the following sample data in the tables:
Donor
| Donor_Id | Donor_FirstName | Donor_LastName | Donor_Address | Donor_City | Donor_State | Donor_ZipCode | Donor_Phone | Donor_Email |
| 101 | James | James | 123 Mockingbird Place | Peoria | IL | 55556 | 555-555-2342 | jj343434@somewhere.com |
| 175 | Joseph | Mays | 54321 7th St | Atlantic City | NJ | 15678 | 555-555-9877 | jojo9@somewhere.com |
| 207 | Susan | Ames | 777 Main St | Burlington | KY | 41098 | 555-555-3478 | amess@elsewhere.com |
| 303 | Nancy | Zornes | P.O. Box 88776 | Peoria | IL | 55578-8776 | 555-555-1255 | zornes98@nowhere.com |
ReceiptType
| ReceiptType_Id | ReceiptType_Description |
| C | Cash |
| CK | Check/Money Order |
| CC | Credit Card |
| PD | Payroll Deduction |
| A | Art / Collectible |
| I | In-kind |
Fund
| Fund_Id | Fund_Name |
| G | General Operation |
| S | Scholarship |
| B | Building Maintenance |
| C | Capital Campaign |
Receipt
| Receipt_Id | Donor_Id | Receipt_Date | ReceiptType_Id | Fund_Id | Receipt_Amount | Receipt_Description |
| 1001 | 101 | 2015-01-05 | CK | G | 100 | |
| 1002 | 207 | 2015-01-05 | C | S | 250 | For: Virginia Wolfe Wilde |
| 1003 | 207 | 2015-01-05 | C | B | 100 | |
| 1004 | 175 | 2015-01-06 | CC | G | 137.5 | In Memory of Bob |
| 1005 | 101 | 2015-02-14 | CK | G | 100 | |
| 1006 | 175 | 2015-02-20 | A | C | 15000000 | Picasso Painting |
The client wants a listing of donor id, last name, first name, receipt date, type, and amount for all receipts greater than $100.00. The client wants the result sorted by donor last name, first name, and the donation date. The query result from the sample data would look like:
| 207 | Ames | Susan | 2015-01-05 | C | 250 |
| 175 | Mays | Joseph | 2015-01-06 | CC | 137.5 |
| 175 | Mays | Joseph | 2015-02-20 | A | 15000000 |
Fill in the blanks of the SQL Statement:
Donor. , Donor_LastName, Donor_FirstName, Receipt_Date,
Receipt.ReceiptType_Id, Receipt_Amount
Receipt
Receipt.Donor_Id =
Receipt_Amount 100.00
Donor_LastName, Donor_FirstName, Receipt_Date;
Fill in the blanks with words that would best complete the passage.
from
>
donor.donor_id
select
donor
where
and
order
by
donor_id
,
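With the word-bank tokens dropped into the blanks, the finished statement would plausibly read (using the implicit join in the WHERE clause that the blanks suggest):

SELECT Donor.Donor_Id, Donor_LastName, Donor_FirstName, Receipt_Date,
       Receipt.ReceiptType_Id, Receipt_Amount
FROM   Donor, Receipt
WHERE  Receipt.Donor_Id = Donor.Donor_Id
  AND  Receipt_Amount > 100.00
ORDER BY Donor_LastName, Donor_FirstName, Receipt_Date;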
The problems for this chapter use a database for a simple department store that sells items to customers and wants to keep track of the invoices, the selling price (if an item is on sale), and the sales tax (7%) to be collected on some items. Every customer and invoice are assigned unique numbers. All items have a Universal Product Code (UPC) number and bar-code assigned to each unique item. Food and non-carbonated beverages are not taxed, but clothing, home goods, and most other items are.
The structure of the tables is described in the following Crow’s Foot ERD:
Sample Data for the tables follow:
Customer
| Customer_Id | Customer_FirstName | Customer_LastName | Customer_Address | Customer_City | Customer_State | Customer_ZipCode | Customer_Phone | Customer_Email |
| 342 | Linda | Spangler | 2323 Roanoke Pk | Floyd | VA | 24987 | 555-555-5646 | linda5646@nowhere.com |
| 505 | Rodney | Ray | 12399 27th Ave | New York | NY | 10097 | 555-555-0909 | rayray7@somewhere.com |
| 776 | Nancy | Reno | P.O.Box 98 | Carter City | KY | 41155 | 555-555-2342 | puppylove8@elsewhere.com |
| 987 | Gustov | Jones | 333 East Main St | Jamestown | VA | 23099 | 555-555-9876 | gustov99@somewhere.com |
ItemType
| ItemType_Id | ItemType_Description |
| W | Woman’s Clothing |
| M | Men’s Clothing |
| WA | Woman’s Accessories |
| MA | Men’s Accessories |
| A | General Accessories |
| O | Other |
ItemSize
| ItemSize_Id | ItemSize_Description |
| XS | Extra Small |
| S | Small |
| M | Medium |
| L | Large |
| XL | Extra Large |
Item
| UPC | Item_Description | ItemType_Id | ItemSize_Id | Item_Price | Item_Taxable |
| 012345234569 | Cream Blouse | W | S | 29.95 | 1 |
| 012345234576 | Cream Blouse | W | M | 29.95 | 1 |
| 012345234588 | Cream Blouse | W | L | 29.95 | 1 |
| 012345234590 | Cream Blouse | W | XL | 29.95 | 1 |
| 012345234468 | Blue Blouse | W | S | 29.95 | 1 |
| 012345234475 | Blue Blouse | W | M | 29.95 | 1 |
| 012345234491 | Blue Blouse | W | XL | 29.95 | 1 |
| 012345224889 | 12 Inch Pearl Necklace | WA | | 345.95 | 1 |
| 012345224126 | 10 Inch Pearl Necklace | WA | | 298.95 | 1 |
| 012345334678 | Explorer Cargo Shorts | M | S | 33.45 | 1 |
| 012345334734 | Explorer Cargo Shorts | M | M | 33.45 | 1 |
| 012345334795 | Explorer Cargo Shorts | M | L | 33.45 | 1 |
| 012345334889 | Explorer Cargo Shorts | M | XL | 33.45 | 1 |
| 012345335101 | Pink Silk Tie | MA | | 67.55 | 1 |
| 012345335303 | Pink and Green Silk Tie | MA | | 67.55 | 1 |
| 012345999001 | Yummy Bottled Water | O | | 1.29 | 0 |
Invoice
| Invoice_Number | Customer_Id | Invoice_Date | Invoice_Taxable | Invoice_NonTaxable | Invoice_SalesTax | Invoice_Total |
| 10101 | 987 | 2015-07-27 | 29.95 | 2.58 | 2.1 | 34.63 |
| 10102 | 505 | 2015-07-27 | 33.45 | 0 | 2.34 | 35.79 |
| 10107 | 505 | 2015-07-28 | 59.99 | 1.29 | 4.2 | 65.48 |
| 10111 | 342 | 2015-07-28 | 262.89 | 0 | 18.4 | 281.29 |
InvoiceDetail
| Invoice_Number | UPC | Detail_Quantity | Detail_RegularPrice | Detail_SellingPrice |
| 10101 | 012345334795 | 1 | 33.45 | 29.95 |
| 10101 | 012345999001 | 2 | 1.29 | 1.29 |
| 10102 | 012345334889 | 1 | 33.45 | 33.45 |
| 10107 | 012345335303 | 1 | 67.55 | 59.99 |
| 10107 | 012345999001 | 1 | 1.29 | 1.29 |
| 10111 | 012345234576 | 1 | 29.95 | 29.95 |
| 10111 | 012345234475 | 1 | 29.95 | 29.95 |
| 10111 | 012345224126 | 1 | 298.95 | 202.99 |
Suppose that we have a second table with vendor information (sample is below) in it and that we want to create a single telephone directory with both vendor and customer information in it.
| Vendor_Id | Vendor_CompanyName | Vendor_Address | Vendor_City | Vendor_State | Vendor_ZipCode | Vendor_Phone | Vendor_Email |
| 101 | QRS Importers | 12345 Dock St | San Fransisco | CA | 97654 | 555-544-4444 | bob@importeverythingsf.com |
| 505 | ABC Supply | 505 Euclid Ave | Lexington | KY | 40505 | 555-505-0505 | sales@abcsupplylex.com |
The phone directory should contain the state, a name column with either the customer last name and first name concatenated with a comma or the vendor company name, city, and phone number. Output should be sorted by state then by name. Your results should look like:
| State | Name | City | Phone |
| CA | QRS Importers | San Fransisco | 555-544-4444 |
| KY | ABC Supply | Lexington | 555-505-0505 |
| KY | Reno, Nancy | Carter City | 555-555-2342 |
| NY | Ray, Rodney | New York | 555-555-0909 |
| VA | Jones, Gustov | Jamestown | 555-555-9876 |
| VA | Spangler, Linda | Floyd | 555-555-5646 |
Fill in the blanks (if a slot should be left empty, use “blank” as the answer).
SELECT , , ,
FROM ( SELECT || ‘, ‘ || Customer_FirstName AS Name,
Customer_City , Customer_State AS State, Customer_Phone AS Phone
FROM Customer
SELECT Vendor_CompanyName AS Name,
Vendor_City AS City, Vendor_State AS State, Vendor_Phone AS Phone
FROM Vendor )
State, Name;
Fill in the blanks with words that would best complete the passage.
blank
Phone
city
City
Customer_LastName
blank
Name
AS
BY
UNION
State
ORDER
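One plausible completed statement, following the blanks as given (some database products would also require an alias on the derived table):

SELECT State, Name, City, Phone
FROM ( SELECT Customer_LastName || ', ' || Customer_FirstName AS Name,
              Customer_City AS City, Customer_State AS State, Customer_Phone AS Phone
       FROM Customer
       UNION
       SELECT Vendor_CompanyName AS Name,
              Vendor_City AS City, Vendor_State AS State, Vendor_Phone AS Phone
       FROM Vendor )
ORDER BY State, Name;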
You have been tasked to generate a report from the database using a single SQL statement to do the following:
List all items together with the number of invoices on which each item has appeared and the total quantity of that item sold. Display the results in order by type, description, and size.
Your generated output should look like:
| UPC | Item_Description | ItemType_Id | ItemSize_Id | Invoice_Count | Total_Sales |
| 012345334795 | Explorer Cargo Shorts | M | L | 1 | 1 |
| 012345334734 | Explorer Cargo Shorts | M | M | 0 | |
| 012345334678 | Explorer Cargo Shorts | M | S | 0 | |
| 012345334889 | Explorer Cargo Shorts | M | XL | 1 | 1 |
| 012345335101 | Pink Silk Tie | MA | | 0 | |
| 012345335303 | Pink and Green Silk Tie | MA | | 1 | 1 |
| 012345999001 | Yummy Bottled Water | O | | 2 | 3 |
| 012345234475 | Blue Blouse | W | M | 1 | 1 |
| 012345234468 | Blue Blouse | W | S | 0 | |
| 012345234491 | Blue Blouse | W | XL | 0 | |
| 012345234588 | Cream Blouse | W | L | 0 | |
| 012345234576 | Cream Blouse | W | M | 1 | 1 |
| 012345234569 | Cream Blouse | W | S | 0 | |
| 012345234590 | Cream Blouse | W | XL | 0 | |
| 012345224126 | 10 Inch Pearl Necklace | WA | | 1 | 1 |
| 012345224889 | 12 Inch Pearl Necklace | WA | | 0 | |
Fill in the blanks (if a slot should be left empty, use “blank” as the answer).
SELECT Item.UPC, Item.Item_Description, Item.ItemType_Id,
Item.ItemSize_Id, COUNT(Invoice.Invoice_Number) AS Invoice_Count,
SUM(InvoiceDetail.Detail_Quantity) AS Total_Sales
FROM item
InvoiceDetail ON item. = .UPC
Invoice ON .Invoice_Number = InvoiceDetail.
GROUP BY Item.UPC
ORDER BY Item.ItemType_id, Item.Item_Description, Item.ItemSize_Id;
Fill in the blanks with words that would best complete the passage.
InvoiceDetail
LEFT
JOIN
blank
UPC
LEFT
Invoice
Invoice_Number
JOIN
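Filled in from the word bank, the statement would plausibly read as follows; note that some database products would insist that every non-aggregated column in the SELECT list also appear in the GROUP BY clause, while the exercise's skeleton groups on Item.UPC alone.

SELECT Item.UPC, Item.Item_Description, Item.ItemType_Id,
       Item.ItemSize_Id, COUNT(Invoice.Invoice_Number) AS Invoice_Count,
       SUM(InvoiceDetail.Detail_Quantity) AS Total_Sales
FROM   Item
       LEFT JOIN InvoiceDetail ON Item.UPC = InvoiceDetail.UPC
       LEFT JOIN Invoice ON Invoice.Invoice_Number = InvoiceDetail.Invoice_Number
GROUP BY Item.UPC
ORDER BY Item.ItemType_Id, Item.Item_Description, Item.ItemSize_Id;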
Which of the following is the data dictionary’s function in database design?
In the decentralized conceptual database design approach, the aggregation process requires the lead designer to assemble a single model where which of the following aggregation problems must be addressed?
______________ design begins by identifying the different entity types and the definition of each entity’s attributes.
_______________ is the last stage in the database design process.
Which conceptual database design is best suited to relatively small and simple databases that lend themselves well to a bird’s eye view of the entire database and may be designed by a single person or by a small and informally constituted design team?
A(n) ________________________ is a system that provides for data collection, storage, and retrieval; facilitates the transformation of data into information; and manages both data and information. It is composed of hardware, the DBMS and other software, database(s), people, and procedures.
____________ are narrative descriptions of the business policies, procedures, or principles that are derived from a detailed description of operations.
The DBLC is composed of _____ phases.
Business rules are particularly valuable to database designers, because they help define which of the following?
The _________________ goal is to design an enterprise-wide database that is based on a specific data model but independent of physical-level details.
Which of the following is NOT a step performed in the physical design stage in the database design process?
Which of the following is established during the systems design phase, in which the designer completes the design of all required system processes?
The conceptual design is composed of _____ steps.
Ultimately, the purpose of an _____________________ is to facilitate good decision making by making relevant and timely information available to the decision makers.
Which conceptual database design is best when company operations are spread across multiple operational sites or when the database has multiple entities that are subject to complex relations?
SDLC is the acronym that is used to label the _______________________________.
A(n) ______________________ backup of the database creates a backup of only those database objects that have changed since the last full backup.
Which of the following shows how systems analysis fits into a discussion about information systems?
DBLC is the acronym that is used to label the _______________________________.
The SDLC is composed of _____ phases.
Which of the following is NOT a step performed in the logical design stage in the database design process?
A(n) _____________ is the use of different names to identify the same object, such as an entity, an attribute, or a relationship.
Database design must yield a database that does which of the following?
__________ design first defines the required attributes and then groups the attributes to form entities.
The ___________________ specifies that all the data defined in the data model are actually required to fit present and expected future data requirements.
Which of the following is NOT one of the six (6) steps identified as part of the ER model verification process?
How many steps are required in the development of the conceptual model using an ER diagram?
Which of the following is NOT an important factor in the selection of a DBMS software product?
2. Each of the following activities is commonly performed during the Database Life Cycle (DBLC). Fill in the blank before each activity with the number of the DBLC phase in which that activity would normally be performed.
DBLC Task Numbers:
1. Database initial study
2. Database design
3. Implementation and loading
4. Testing and evaluation
5. Operation
6. Maintenance and evolution
____ Load the initial values into the tables
____ Finish user documentation
____ Adding additional tables, attributes, and indexes
____ Attempt to gain unauthorized access to the data
____ Interview management
____ Convert existing data
____ Study the competition’s database
____ Plan how to grant different levels of access to different user groups
____ Install the database
____ Train users
____ Changing constraints to match changes in business rules
____ Define budget and scope
____ Select a DBMS software solution
____ Draw a logical ERD
____ Performing software patches to the DBMS
____ Create the database
____ Understand how this database will connect to other databases in the organization
____ Develop a Conceptual Model
____ Make sure application software updates the database
____ Regular security audits
____ Define objectives
____ Create a detailed model that can be physically implemented
Chapter 6 discusses four types of perceptual distortions: stereotyping, halo effects, selective perception, and projection. Define each of these types of perceptual distortions and provide a full example of each perceptual distortion.
For all discussion questions, a primary response of at least 300 words must be posted to the discussion forum; the post must be submitted by Wednesday at midnight. Each student must also post a reply to another student’s posting (minimum 200 words) by Sunday at midnight. All late submissions will receive a zero grade.
Requirements (please read)
For each discussion, you are required to write an initial post (300 words) and one secondary post (200 words). The discussion forums will be worth 40 points apiece—25 points for the initial post and 15 points for the secondary post. For your initial and secondary posts, you must have two academic peer-reviewed articles for references. You must get them from the library. There are directions at the top of our Moodle page showing how to utilize the library.
Grading for discussions.
Response 1 (Satish)
Perception is the selection and organization of environmental information to create meaningful experiences for the perceiver; it is the process of making sense of sensory data. Perception acts as a filter or screen so that we are not overwhelmed by all of the stimuli that bombard us. Three aspects of perception deserve attention: organizing information, selective attention, and perceptual bias.
We all have perceptual distortions that result from our particular ways of organizing information and focusing attention. Some common distortions include halo/horn effects, projection, selective perception, and stereotyping.
· Stereotyping: Stereotyping is a frequent result of quick, automatic perception and attribution processes when we deal with people we see as different from ourselves. A stereotype is an exaggerated evaluative opinion or judgment about a group of people that is applied to an individual. Stereotyping occurs when we attribute behavior, attitudes, motives, or other characteristics to a person based on the group to which that person belongs. The fact that stereotyping is so common in society does not mean we should accept stereotypical relating as inevitable; stereotypes have negative consequences for relationships at work.
Example: applying the general characteristics attributed to elderly people as a group to one elderly individual.
· Halo effects: A halo effect occurs when one attribute of a person or situation is used to develop an overall impression of that person or situation. Like stereotypes, these distortions are most likely to occur in the organization stage of perception. Halo effects are common in everyday life: when meeting a new person, for instance, a pleasant smile can lead to a positive first impression of an overall “warm” and “honest” individual. The consequence of a halo effect is the same as that of a stereotype, however: individual differences are obscured.
Example: assuming that a person who is smiling must be more honest than a person who is frowning.
· Selective perception: Selective perception is the tendency to single out those aspects of a situation, person, or object that are consistent with one’s needs, values, or attitudes. Its strongest impact occurs in the attention stage of the perceptual process. This distortion was identified in a classic research study involving executives in a manufacturing company: when asked to identify the key problem in a comprehensive business policy case, each executive chose problems consistent with his or her own functional work area. For instance, most marketing executives saw the key problem as sales, whereas production people tended to see it as a problem of production and organization.
Example: a negotiator who keeps seeing the same smiling face as new information arrives may form his or her own selective perceptions during the negotiation.
· Projection: Projection is the assignment of one’s personal attributes to other people; it is especially likely to occur in the interpretation stage of perception. A classic projection error is illustrated by managers who assume that the needs of their subordinates coincide with their own. Suppose, for instance, that you enjoy responsibility and achievement in your work, and that you are the newly appointed supervisor of a group whose jobs seem dull and routine. You may move quickly to enlarge those jobs so the workers can gain greater satisfaction from more challenging tasks, because you want them to experience the things that you personally value in work.
Example: a party that is itself frustrated by delays in a negotiation may assume the other party will show the same dissatisfaction when the project is announced.
References:
Hollier, M. P., Hawksford, M. O., & Guard, D. R. (1993). Types of perceptual distortions, vol. 41, no. 12, pp. 1008–1021.
Response 2 (Dinesh)
Perceptual distortion is essentially an incorrect understanding of things, or in other words an abnormal interpretation. Perceptual distortions usually occur when something is perceived differently from the way others respond to the same stimuli. They are often associated with mental disorders, drugs, problems with the sensory organs, and so on.
Different types of perceptual distortion include mental set, personality, halo effect, stereotyping, first impression, attribution, and selective perception.
Stereotyping
Stereotyping is perceiving individuals according to the category of people they belong to, based on the characteristics attributed to that group.
For example, “all Irish women are beautiful” or “all Mexicans are pleasant.” Those statements will not be correct in every case; they break down in many instances.
Halo Effects
The halo effect is judging something based on only one characteristic; the impression created by that one trait then colors the perception of everything else.
For example, consider iPhone and Android users: many people buy an Android phone because it allows free apps, is easy to use, and costs less, yet others still buy an iPhone simply because it is an iPhone; the brand alone creates the favorable impression.
Selective perception
Selective perception is perceiving what one wants to perceive and ignoring what one does not want to see. People who must exercise judgment or make decisions should not fall into this trap, yet they often support only one side of an issue, attending to certain characteristics only.
For example, the media reports both the good and the bad about any government, but a person with selective perception always takes in only one of those sides.
Projection Distortion
Projection concerns the way people look at things and relate them to the events around them; it is essentially attributing or expressing one’s own feelings about something onto others. People prone to projection can influence other people’s perceptions very easily. This distortion falls under the heading of personality distortions.
For example, when I attribute my own personality or behavior to someone else, that is nothing but projection.
References
Holbrook, M. B. (1983). Using a structural model of halo effect to assess perceptual distortion due to affective overtones. Journal of Consumer Research, 10(2), 247.
Lewicki, R. J., Barry, B., & Saunders, D. M. (2015). Negotiation (7th ed.). McGraw-Hill. ISBN 978-0-07-802944-9.
Thomas, A. K., & Dubois, S. J. (2011). Reducing the burden of stereotype threat eliminates age differences in memory distortion. Psychological Science, 22(12), 1515–1517.
Sara Baase San Diego State University
Boston Columbus Indianapolis New York San Francisco Upper Saddle River Amsterdam Cape Town Dubai London Madrid Milan Munich Paris Montreal Toronto Delhi Mexico City Sao Paulo Sydney Hong Kong Seoul Singapore Taipei Tokyo
Editorial Director Marcia Horton Executive Editor Tracy Johnson Associate Editor Carole Snyder Editorial Assistant Jenah Blitz-Stoehr Director of Marketing Christy Lesko Marketing Manager Yez Alayan Marketing Coordinator Kathryn Ferranti Director of Production Erin Gregg Managing Editor Jeff Holcomb Production Project Manager Kayla Smith-Tarbox Operations Supervisor Nick Skilitis Manufacturing Buyer Lisa McDowell Art Director Anthony Gemmellaro
Cover Designer Anthony Gemmellaro Manager, Visual Research Karen Sanatar Manager, Rights and Permissions Michael Joyce Text Permission Coordinator Danielle Simon Cover Art Crocodile Images/Glow Images,
Yuri Arcurs/AGE Fotostock Lead Media Project Manager Daniel Sandin Full-Service Project Management Windfall Software Composition Windfall Software Printer/Binder R.R. Donnelley Harrisonburg Cover Printer R.R. Donnelley Harrisonburg Text Font Adobe Garamond
Credits and acknowledgements. Excerpt from Mike Godwin speech: at Carnegie Mellon University, November 1994. Copyright © 1994 by Mike Godwin. Reprinted with permission. Excerpt from Jerrold H. Zar’s “Candidate for a Pullet Surprise”: from JOURNAL OF IRREPRODUCIBLE RESULTS, 39, no. 1 (Jan/Feb 1994). Copyright © 1994 Norman Sperling Publishing. Reprinted with permission. Excerpt from “Social and Legal Issues”: From INVITATION TO COMPUTER SCIENCE, 1E by Schneider/Gertsing. Copyright © 1995 South-Western, a part of Cengage Learning, Inc. Reproduced by permission. www.cengage.com/permissions. Appendix A.1: The Software Engineering Code of Ethics and Professional Practice. THE SOFTWARE ENGINEERING CODE OF ETHICS AND PROFESSIONAL PRACTICE © 1999 by the Institute of Electrical and Electronics Engineers, Inc. and the Association for Computing Machinery, Inc. Reprinted by permission. Appendix A.2: The ACM Code of Ethics and Professional Conduct. ACM CODE OF ETHICS AND PROFESSIONAL CONDUCT. Copyright © 1999 by the Association for Computing Machinery, Inc. and the Institute for Electrical and Electronics Engineers, Inc. Reprinted by permission. Adi Kamdar Excerpt: Adi Kamdar, “EFF Denounces Flawed E-Verify Proposal That Would Trample on Worker Privacy,” July 1, 2011, www.eff.org/deeplinks/2011/07/eff-denounces-flawede-verify-proposal, viewed July 31, 2011. Reprinted under the terms of the Creative Commons Attributions License. Calvin and Hobbes “today at school . . . ” cartoon © 1993 Watterson. Reprinted with permission of UNIVERSAL PRESS SYNDICATE. All rights reserved. Calvin and Hobbes “what’s all the fuss about computers . . . ” cartoon © 1995 Watterson. Dist. By UNIVERSAL PRESS SYNDICATE. Reprinted with permission. All rights reserved. “Opus” cartoon used with the permission of Berkeley Breathed and the Cartoonist Group. All rights reserved.
Copyright © 2013, 2008, 2003 by Pearson Education, Inc., publishing as Prentice Hall. All rights reserved. Manufactured in the United States of America. This publication is protected by Copyright, and permission should be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. To obtain permission(s) to use material from this work, please submit a written request to Pearson Education, Inc., Permissions Department, One Lake Street, Upper Saddle River, New Jersey 07458, or you may fax your request to 201-236-3290.
Many of the designations by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed in initial caps or all caps.
Library of Congress Cataloging-in-Publication Data
Baase, Sara. A gift of fire : social, legal, and ethical issues for computing technology / Sara Baase. — 4th ed.
p. cm. Includes bibliographical references and index. ISBN 978-0-13-249267-6 1. Computers—Social aspects. 2. Computers—Moral and ethical aspects. 3. Internet—Social aspects.
4. Internet—Moral and ethical aspects. I. Title. QA76.9.C66B3 2013 303.48′34—dc23 2012020988
10 9 8 7 6 5 4 3 2 1
ISBN 10: 0-13-249267-9 ISBN 13: 978-0-13-249267-6
To Keith, always
And to Michelle Nygord Matson (1959–2012)
For her love of life, learning, and adventure For her laughter, wisdom, and determination For her friendship
Contents
Preface xiii
Prologue 1
1 UNWRAPPING THE GIFT 3
1.1 The Pace of Change 4 1.2 Change and Unexpected Developments 6
1.2.1 Connections: Cellphones, Social Networking, and More 7 1.2.2 E-commerce and Free Stuff 15 1.2.3 Artificial Intelligence, Robotics, Sensors, and Motion 17 1.2.4 Tools for Disabled People 21
1.3 Themes 23 1.4 Ethics 26
1.4.1 What Is Ethics, Anyway? 26 1.4.2 A Variety of Ethical Views 28 1.4.3 Some Important Distinctions 36 Exercises 40
2 PRIVACY 47
2.1 Privacy Risks and Principles 48 2.1.1 What Is Privacy? 48 2.1.2 New Technology, New Risks 50 2.1.3 Terminology and Principles for Managing Personal Data 56
2.2 The Fourth Amendment, Expectation of Privacy, and Surveillance Technologies 60 2.2.1 The Fourth Amendment 61 2.2.2 New Technologies, Supreme Court Decisions, and Expectation of
Privacy 63 2.2.3 Search and Seizure of Computers and Phones 66 2.2.4 Video Surveillance and Face Recognition 68
2.3 The Business and Social Sectors 70 2.3.1 Marketing and Personalization 70 2.3.2 Our Social and Personal Activity 75 2.3.3 Location Tracking 79 2.3.4 A Right to Be Forgotten 82
2.4 Government Systems 84 2.4.1 Databases 84 2.4.2 Public Records: Access versus Privacy 90 2.4.3 National ID Systems 91
2.5 Protecting Privacy: Technology, Markets, Rights, and Laws 95 2.5.1 Technology and Markets 95 2.5.2 Rights and Law 100 2.5.3 Privacy Regulations in the European Union 110
2.6 Communications 112 2.6.1 Wiretapping and Email Protection 113 2.6.2 Designing Communications Systems for Interception 115 2.6.3 The NSA and Secret Intelligence Gathering 116 Exercises 119
3 FREEDOM OF SPEECH 133
3.1 Communications Paradigms 134 3.1.1 Regulating Communications Media 134 3.1.2 Free Speech Principles 137
3.2 Controlling Speech 139 3.2.1 Offensive Speech: What Is It? What Is Illegal? 139 3.2.2 Censorship Laws and Alternatives 141 3.2.3 Child Pornography and Sexting 146 3.2.4 Spam 148 3.2.5 Challenging Old Regulatory Structures and Special Interests 152
3.3 Posting, Selling, and Leaking Sensitive Material 153 3.4 Anonymity 159 3.5 The Global Net: Censorship and Political Freedom 163
3.5.1 Tools for Communication, Tools for Oppression 163 3.5.2 Aiding Foreign Censors and Repressive Regimes 165 3.5.3 Shutting Down Communications in Free Countries 168
3.6 Net Neutrality Regulations or the Market? 169 Exercises 171
4 INTELLECTUAL PROPERTY 179
4.1 Principles, Laws, and Cases 180 4.1.1 What Is Intellectual Property? 180 4.1.2 Challenges of New Technologies 182 4.1.3 A Bit of History 185 4.1.4 The Fair Use Doctrine 186 4.1.5 Ethical Arguments About Copying 187 4.1.6 Significant Legal Cases 190
4.2 Responses to Copyright Infringement 196 4.2.1 Defensive and Aggressive Responses From the Content Industries 196 4.2.2 The Digital Millennium Copyright Act: Anticircumvention 201 4.2.3 The Digital Millennium Copyright Act: Safe Harbor 204 4.2.4 Evolving Business Models 206
4.3 Search Engines and Online Libraries 208 4.4 Free Software 211
4.4.1 What Is Free Software? 211 4.4.2 Should All Software Be Free? 213
4.5 Patents for Inventions in Software 214 4.5.1 Patent Decisions, Confusion, and Consequences 215 4.5.2 To Patent or Not? 218 Exercises 220
5 CRIME 229
5.1 Introduction 230 5.2 Hacking 230
5.2.1 What is “Hacking”? 230 5.2.2 Hacktivism, or Political Hacking 236 5.2.3 Hackers as Security Researchers 237 5.2.4 Hacking as Foreign Policy 239 5.2.5 Security 241 5.2.6 The Law: Catching and Punishing Hackers 245
5.3 Identity Theft and Credit Card Fraud 250 5.3.1 Stealing Identities 251 5.3.2 Responses to Identity Theft 253 5.3.3 Biometrics 257
5.4 Whose Laws Rule the Web? 258 5.4.1 When Digital Actions Cross Borders 258 5.4.2 Libel, Speech, and Commercial Law 262 5.4.3 Culture, Law, and Ethics 265 5.4.4 Potential Solutions 266 Exercises 267
6 WORK 275
6.1 Changes, Fears, and Questions 276 6.2 Impacts on Employment 277
6.2.1 Job Destruction and Creation 277 6.2.2 Changing Skills and Skill Levels 282 6.2.3 Telecommuting 284 6.2.4 A Global Workforce 287
6.3 Employee Communication and Monitoring 293 6.3.1 Learning About Job Applicants 293 6.3.2 Risks and Rules for Work and Personal Communications 296 Exercises 304
7 EVALUATING AND CONTROLLING TECHNOLOGY 311
7.1 Evaluating Information 312 7.1.1 The Need for Responsible Judgment 312 7.1.2 Computer Models 321
7.2 The “Digital Divide” 329 7.2.1 Trends in Computer Access 329 7.2.2 The Global Divide and the Next Billion Users 331
7.3 Neo-Luddite Views of Computers, Technology, and Quality of Life 332 7.3.1 Criticisms of Computing Technologies 333 7.3.2 Views of Economics, Nature, and Human Needs 336
7.4 Making Decisions About Technology 342 7.4.1 Questions 343 7.4.2 The Difficulty of Prediction 344 7.4.3 Intelligent Machines and Superintelligent Humans—Or the End of the
Human Race? 347 7.4.4 A Few Observations 350 Exercises 350
8 ERRORS, FAILURES, AND RISKS 361
8.1 Failures and Errors in Computer Systems 362 8.1.1 An Overview 362 8.1.2 Problems for Individuals 364 8.1.3 System Failures 367 8.1.4 What Goes Wrong? 375
8.2 Case Study: The Therac-25 377 8.2.1 Therac-25 Radiation Overdoses 377 8.2.2 Software and Design Problems 378 8.2.3 Why So Many Incidents? 380 8.2.4 Observations and Perspective 382
8.3 Increasing Reliability and Safety 383 8.3.1 Professional Techniques 383 8.3.2 Trust the Human or the Computer System? 388 8.3.3 Law, Regulation, and Markets 389
8.4 Dependence, Risk, and Progress 392 8.4.1 Are We Too Dependent on Computers? 392 8.4.2 Risk and Progress 393 Exercises 395
9 PROFESSIONAL ETHICS AND RESPONSIBILITIES 403
9.1 What Is “Professional Ethics”? 404 9.2 Ethical Guidelines for Computer Professionals 405
9.2.1 Special Aspects of Professional Ethics 405 9.2.2 Professional Codes of Ethics 406 9.2.3 Guidelines and Professional Responsibilities 407
9.3 Scenarios 410 9.3.1 Introduction and Methodology 410 9.3.2 Protecting Personal Data 412 9.3.3 Designing an Email System With Targeted Ads 414 9.3.4 Webcams in School Laptops1 415 9.3.5 Publishing Security Vulnerabilities 416 9.3.6 Specifications 417 9.3.7 Schedule Pressures 418 9.3.8 Software License Violation 421 9.3.9 Going Public 422 9.3.10 Release of Personal Information 423 9.3.11 Conflict of Interest 424 9.3.12 Kickbacks and Disclosure 426 9.3.13 A Test Plan 427 9.3.14 Artificial Intelligence and Sentencing Criminals 427 9.3.15 A Gracious Host 430 Exercises 430
Epilogue 437
A THE SOFTWARE ENGINEERING CODE AND THE ACM CODE 439
A.1 Software Engineering Code of Ethics and Professional Practice 439 A.2 ACM Code of Ethics and Professional Conduct 447
Index 455
Preface
This book has two intended audiences: students preparing for careers in computer science (and related fields) and students in other fields who want to learn about issues that arise from computing technology, the Internet, and other aspects of cyberspace. The book has no technical prerequisites. Instructors can use it at various levels, in both introductory and advanced courses about computing or technology.
Scope of This Book
Many universities offer courses with titles such as “Ethical Issues in Computing” or “Computers and Society.” Some focus primarily on professional ethics for computer professionals. Others address a wide range of social issues. The bulky subtitle and the table of contents of this book indicate its scope. I also include historical background to put some of today’s issues in context and perspective. I believe it is important for students (in computer and information technology majors and in other majors) to see and understand the implications and impacts of the technology. Students will face a wide variety of issues in this book as members of a complex technological society, in both their professional and personal lives.
The last chapter focuses on ethical issues for computer professionals. The basic ethical principles are not different from ethical principles in other professions or other aspects of life: honesty, responsibility, and fairness. However, within any one profession, special kinds of problems arise. Thus, we discuss professional ethical guidelines and case scenarios specific to computing professions. I include two of the main codes of ethics and professional practices for computer professionals in an Appendix. I placed the professional ethics chapter last because I believe students will find it more interesting and useful after they have as background the incidents, issues, and controversies in the earlier chapters.
Each of the chapters in this book could easily be expanded to a whole book. I had to leave out many interesting topics and examples. In some cases, I mention an issue, example, or position with little or no discussion. I hope some of these will spark further reading and debate.
Changes for the Fourth Edition
For this fourth edition, I updated the whole book, removed outdated material, added many new topics and examples, and reorganized several topics. New material appears throughout. I mention here some major changes, completely new sections and topics, and some that I extensively revised.
. This edition has approximately 85 new exercises.
. In Chapter 1, I added a section on kill switches for smartphone apps, tablets, and so on, i.e., the ability of companies to remotely delete apps and other items from a user’s device (in Section 1.2.1).
. All parts of Section 1.2 have new material, including, for example, uses of smartphone data and social network data for social research.
. I added a brief section on social contracts and John Rawls’ views on justice and fairness (in Section 1.4.2).
New topics in Chapter 2 include
. smartphones and their apps collecting personal data without permission (in Section 2.1.2)
. Fourth Amendment issues about tracking a person’s location via cellphone, tracking cars with GPS devices, and search of cellphones (in Sections 2.2.2 and 2.2.3)
. applications of face recognition (several places in the chapter)
. privacy implications of some social networking applications and social network company policies
. a right to be forgotten (Section 2.3.4)
Chapter 3 includes new sections on
. sexting (Section 3.2.3)
. ethics of leaking sensitive information (in Section 3.3)
. shutting down cellphone service or access to social media during riots or protests (Section 3.5.3)
The chapter also has
. use of social media by freedom movements and countermeasures by governments
. more on Western countries selling surveillance systems to dictators.
Chapter 4 includes
. discussion of plagiarism
. expanded sections on the Digital Millennium Copyright Act (Sections 4.2.2 and 4.2.3)
. an expanded section on patents for software (Section 4.5)
Chapter 5 has new sections on
. hacking by governments to attack others (Section 5.2.4)
. expansion of the Computer Fraud and Abuse Act to cover actions it was not intended to cover (in Section 5.2.6)
Chapter 6 has new sections on
. how content of social media can affect getting hired and fired
. use of social media and personal devices at work
Chapter 7 has expanded sections on
. the “wisdom of the crowd”
. ways the Internet can narrow or restrict the points of view people see (in Section 7.1.1)
Chapter 8 has
. an introduction to high reliability organizations (in Section 8.3.1).
Chapter 9 contains
. two new scenarios.
This is an extremely fast-changing field. Clearly, some issues and examples in this book are so current that details will change before or soon after publication. I don’t consider this to be a serious problem. Specific events are illustrations of the underlying issues and arguments. I encourage students to bring in current news reports about relevant issues to discuss in class. Finding so many ties between the course and current events adds to their interest in the class.
Controversies
This book presents controversies and alternative points of view: privacy vs. access to information, privacy vs. law enforcement, freedom of speech vs. control of content on the Net, pros and cons of offshoring jobs, market-based vs. regulatory solutions, and so on. Often the discussion in the book necessarily includes political, economic, social, and philosophical issues. I encourage students to explore the arguments on all sides and to be able to explain why they reject the ones they reject before they take a position. I believe this approach prepares them to tackle new controversies. They can figure out the consequences of various proposals, generate arguments for each side, and evaluate them. I encourage students to think in principles, rather than case by case, or at least to recognize similar principles in different cases, even if they choose to take different positions on them.
My Point of View
Any writer on subjects such as those in this book has some personal opinions, positions, or biases. I believe strongly in the principles in the Bill of Rights. I also have a generally
positive view of technology. Don Norman, a psychologist and technology enthusiast who writes on humanizing technology, observed that most people who have written books about technology “are opposed to it and write about how horrible it is.”* I am not one of those people. I think that technology, in general, has been a major factor in bringing physical well-being, liberty, and opportunity to hundreds of millions of people. That does not mean technology is without problems. Most of this book focuses on problems. We must recognize and study them so that we can reduce the negative effects and increase the positive ones.
For many topics, this book takes a problem-solving approach. I usually begin with a description of what is happening in a particular area, often including a little history. Next comes a discussion of why there are concerns and what the new problems are. Finally, I give some commentary or perspective and some current and potential solutions to the problems. Some people view problems and negative side effects of new technologies as indications of inherent badness in the technology. I see them as part of a natural process of change and development. We will see many examples of human ingenuity, some that create problems and some that solve them. Often solutions come from improved or new applications of technology.
At a workshop on Ethical and Professional Issues in Computing sponsored by the National Science Foundation, Keith Miller, one of the speakers, gave the following outline for discussing ethical issues (which he credited to a nun who had been one of his teachers years ago): “What? So what? Now what?” It struck me that this describes how I organized many sections of this book.
An early reviewer of this book objected to one of the quotations I include at the beginnings of many sections. He thought it was untrue. So perhaps I should make it clear that I agree with many of the quotations—but not with all of them. I chose some to be provocative and to remind students of the variety of opinions on some of the issues.
I am a computer scientist, not an attorney. I summarize the main points of many laws and legal cases and discuss arguments about them, but I do not give a comprehensive legal analysis. Many ordinary terms have specific meanings in laws, and often a difference of one word can change the impact of a provision of a law or of a court decision. Laws have exceptions and special cases. Any reader who needs precise information about how a law applies in a particular case should consult an attorney or read the full text of laws, court decisions, and legal analysis.
Class Activities
The course I designed in the Computer Science Department at San Diego State University requires a book report, a term paper, and an oral presentation by each student. Students do several presentations, debates, and mock trials in class. The students are very
* Quoted in Jeannette DeWyze, “When You Don’t Know How to Turn On Your Radio, Don Norman Is On Your Side,” The San Diego Reader, Dec. 1, 1994, p. 1.
enthusiastic about these activities. I include several in the Exercises sections, marked as Class Discussion Exercises. Although I selected some exercises for this category, I find that many others in the General Exercises sections are also good for lively class discussions.
It has been an extraordinary pleasure to teach this course. At the beginning of each semester, some students expect boredom or sermons. By the end, most say they have found it eye-opening and important. They’ve seen and appreciated new arguments, and they understand more about the risks of computer technology and their own responsibilities. Many students send me news reports about issues in the course long after the semester is over, sometimes after they have graduated and are working in the field.
Additional Sources
The notes at the ends of the chapters include sources for specific information in the text and, occasionally, additional information and comment. I usually put one endnote at or near the end of a paragraph with sources for the whole paragraph. In a few places the endnote for a section is on the section heading. (We have checked all the Web addresses, but files move, and inevitably some will not work. Usually a search on the author and a phrase from the title of a document will locate it.) The lists of references at the ends of the chapters include some references that I used, some that I think are particularly useful or interesting for various reasons, and some that you might not find elsewhere. I have made no attempt to be complete.
An italic page number in the index indicates the page on which the index entry is defined or explained. The text often refers to agencies, organizations, and laws by acronyms. If you look up the acronym in the index, you will find its expansion.
My website for this book (www-rohan.sdsu.edu/faculty/giftfire) contains updates on topics in the book and other resources. Pearson Education maintains a website (www.pearsonhighered.com/baase) with supplements for instructors, including PowerPoint slides and a testbank. For access to instructor material, please contact your Pearson Education sales representative or visit the site, where you will find instructions.
Feedback
This book contains a large amount of information on a large variety of subjects. I have tried to be as accurate as possible, but, inevitably, there will be errors. I appreciate corrections. Please send them to me at GiftOfFire@sdsu.edu.
Acknowledgments
I am grateful to many people who provided assistance for this edition: Susan Brown (Florida Atlantic University) for advice about citations; Charles Christopher for regularly sending me legal articles perfectly targeted to topics I am writing about; Mike Gallivan (Georgia State University) for checking the Web addresses in endnotes; Julie Johnson (Vanderbilt University) for research assistance, an exercise, and the scenario and analysis in Section 9.3.4; Patricia A. Joseph (Slippery Rock University) for research assistance and
an exercise; Ellen Kraft (Richard Stockton College) for assisting with research and the revision of Section 7.2; Jean Martinez for lively conversations about privacy, security, and social media; Michelle Matson for conversations about several topics in the book; Jack Revelle for bringing kill switches to my attention and sending me excellent articles; Carol Sanders for reading and improving Chapter 2, finding useful sources, and for many conversations about privacy, security, and social media; Marek A. Suchenek (California State University, Dominguez Hills) for research on software patent history and for email conversations about ethics, intellectual property, and human progress; Sue Smith, Char Glacy, and Michaeleen Trimarchi for their observations about how researchers use the Web; and my birding buddies, who got me out looking at birds once a week instead of at a screen.
I thank the following people for reviewing the third edition at the beginning of this project and providing suggestions for the new edition: Ric Heishman (George Mason University); Starr Suzanne Hiltz (New Jersey Institute of Technology); Jim K. Huggins (Kettering University); Patricia A. Joseph (Slippery Rock University); Tamara Maddox (George Mason University); Robert McIllhenny (California State University, Northridge); Evelyn Lulis (DePaul University); and Marek A. Suchenek (California State University, Dominguez Hills).
This edition includes some material from earlier editions. Thus again, I thank all the people I listed in the prefaces of those editions.
I appreciate the efforts of the staff at Pearson Education who worked on this book: my editor Tracy Johnson, associate editor Carole Snyder, production project manager Kayla Smith-Tarbox, the marketing department, and the people behind the scenes who handle the many tasks that must be done to produce a book. I thank the production team: Paul Anagnostopoulos, Richard Camp, Ted Laux, Jacqui Scarlott, and Priscilla Stevens.
Last but most, I thank Keith Mayers, for assisting with research, managing my software, reading all the chapters, being patient, running errands, finding other things to do while I worked (building a guitar!), and being my sweetheart.
Prologue
Prometheus, according to Greek myth, brought us the gift of fire. It is an awesome gift. It gives us the power to heat our homes, cook our food, and run the machines that make our lives more comfortable, healthy, and enjoyable. It is also awesomely destructive, both by accident and by arson. The Chicago fire in 1871 left 100,000 people homeless. In 1990, the oil fields of Kuwait were intentionally set ablaze. Since the beginning of the 21st century, wildfires in the United States have destroyed millions of acres and thousands of homes. In spite of the risks, in spite of these disasters, few of us would choose to return the gift of fire and live without it. We have learned, gradually, how to use it productively, how to use it safely, and how to respond more effectively to disasters, be they natural, accidental, or intentional.
Computer technology is the most significant new technology since the beginning of the Industrial Revolution. It is awesome technology, with the power to make routine tasks quick, easy, and accurate, to save lives, and to create large amounts of new wealth. It helps us explore space, communicate easily and cheaply, find information, create entertainment, and do thousands of other tasks. As with fire, this power creates powerful problems: potential loss of privacy, multimillion-dollar thefts, and breakdowns of large, complex systems (such as air traffic control systems, communications networks, and banking systems) on which we have come to depend. In this book, we describe some of the remarkable benefits of computer and communication technologies, some of the problems associated with them, and some of the means for reducing the problems and coping with their effects.
1 Unwrapping the Gift
1.1 The Pace of Change
1.2 Change and Unexpected Developments
1.3 Themes
1.4 Ethics
Exercises
1.1 The Pace of Change
In a way not seen since Gutenberg’s printing press that ended the Dark Ages and ignited the Renaissance, the microchip is an epochal technology with unimaginably far-reaching economic, social, and political consequences.
—Michael Rothschild1
In 1804, Meriwether Lewis and William Clark set out on a two-and-a-half-year voyage to explore what is now the western United States. Many more years passed before their journals were published. Later explorers did not know that Lewis and Clark had been there before them. Stephen Ambrose points out in his book about the Lewis and Clark expedition, Undaunted Courage, that information, people, and goods moved no faster than a horse—and this limitation had not changed in thousands of years.2 In 1997, millions of people went to the World Wide Web to watch a robot cart called Sojourner roll across the surface of Mars. We chat with people thousands of miles away, and instantly view Web pages from around the world. We can tweet from airplanes flying more than 500 miles per hour.
Telephones, automobiles, airplanes, radio, household electrical appliances, and many other marvels we take for granted were invented in the late 19th and early 20th centuries. They led to profound changes in how we work and play, how we get information, how we communicate, and how we organize our family lives. Our entry into space was one of the most dramatic feats of technology in the 20th century. Sputnik, the first man-made satellite, launched in 1957. Neil Armstrong walked on the moon in 1969. We still do not have personal spacecraft, vacation trips to the moon, or a large amount of commercial or research activity in space. Space tourism for the very rich is in an early stage. The moon landing has had little direct effect on our daily lives. But computer systems in cars can now apply the brakes if a pedestrian is in the car’s path. Some cars park themselves, and experimental cars drive themselves on city streets. Computer programs beat human experts at chess and Jeopardy!, and our smartphones answer our questions. Surgeons perform surgery with robotic instruments miles from the patient. Roughly five billion people use cellphones; U.S. texters send more than a trillion texts in a year; Facebook has more than 800 million members; Twitter users tweet hundreds of thousands of times a day; and these numbers will be out of date when you read them. A day without using an appliance or device containing a microchip is as rare as a day without turning on an electric light.
The first electronic computers were built in the 1940s. Scientists at Bell Laboratories invented the transistor—a basic component of microprocessors—in 1947. The first hard-disk drive, made by IBM in 1956, weighed more than a ton and stored only five megabytes of data, less than the amount of space we use for one photo. Now, we can walk around
with 150 hours of video in a pocket. A disk with a terabyte (one thousand gigabytes, or one trillion bytes) of storage—enough for 250 hours of high definition video—is inexpensive. There are hundreds of billions of gigabytes of information on the Internet. The 1991 space shuttle had a 1-megahertz∗ computer onboard. Ten years later, some luxury automobiles had 100-megahertz computers. Speeds of several gigahertz are now common. When I started my career as a computer science professor, personal computers had not yet been invented. Computers were large machines in air-conditioned rooms; we typed computer programs onto punched cards. If we wanted to do research, we went to a library, where the library catalog filled racks of trays containing 3 × 5 index cards. Social-networking sites were neighborhood pizza places and bars. The point is not that I am old; it is the speed and magnitude of the changes. The way you use computer systems and mobile devices, personally and professionally, will change substantially in two years, in five, and in ten, and almost unrecognizably over the course of your career. The ubiquity of computers, the rapid pace of change, and their myriad applications and impacts on daily life characterize the last few decades of the 20th century and the beginning of the 21st.
∗ This is a measure of processing speed. One megahertz is 1 million cycles per second; 1 gigahertz is 1 billion cycles per second. “Hertz” is named for the 19th-century physicist Heinrich Rudolf Hertz.
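A quick back-of-the-envelope calculation (my own rough figures, not the author's) shows what "250 hours of high definition video on a one-terabyte disk" implies about video data rates.

```python
# Rough arithmetic: what average data rate does "250 hours of high
# definition video per terabyte" imply? Illustrative figures only.

TERABYTE_BYTES = 1_000_000_000_000   # one trillion bytes, as defined above
HOURS = 250
SECONDS = HOURS * 3600

bytes_per_second = TERABYTE_BYTES / SECONDS
megabits_per_second = bytes_per_second * 8 / 1_000_000

print(f"{bytes_per_second / 1_000_000:.1f} MB per second")   # about 1.1
print(f"{megabits_per_second:.1f} megabits per second")      # about 8.9
```

A rate of roughly nine megabits per second is in the range commonly quoted for compressed high definition video, so the figure in the text is a reasonable one.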
It is not just the technology that changes so fast. Social impacts and controversies morph constantly. With PCs and floppy disks came computer viruses and the beginnings of a huge challenge to the concept of copyright. With email came spam. With increased storage and speed came databases with details about our personal and financial lives. With the Web, browsers, and search engines came easy access by children to pornography, more threats to privacy, and more challenges to copyright. Online commerce brought bargains to consumers, opportunities to entrepreneurs, and identity theft and scams. Cellphones have had so many impacts that we discuss them in more detail later in this chapter and in Chapter 2. With hindsight, it might seem odd that people worried so much about antisocial, anticommunity effects of computers and the early Internet. Now, with the popularity of social networking, texting, and sharing video, photos, and information, the Net is a very social place. In 2008, “experts” worried the Internet would collapse within two years because of the demands of online video. It did not. Privacy threats of concern several years ago seem minor compared to new ones. People worried about how intimidating computers and the Internet were; now toddlers operate apps on tablets and phones. Concerns about technology “haves” and “have-nots” (the “digital divide”) waned as Internet access and cellphones spread throughout the United States and around the world, shrinking the digital divide far faster than long-standing global divides in, say, education and access to fresh water.
Discussions of social issues related to computers often focus on problems, and indeed, throughout this book we examine problems created or intensified by computer technologies. Recognizing the benefits is important too. It is necessary for forming a reasonable, balanced view of the impact and value of the technology. Analyzing and evaluating the impact of new technologies can be difficult. Some of the changes are obvious. Some are more subtle. Even when benefits are obvious, the costs and side effects might not be, and vice versa. Both the technological advances brought about by computer technology and the extraordinary pace of development have dramatic, sometimes unsettling, impacts on people’s lives. To some, this is frightening and disruptive. They see the changes as dehumanizing, reducing the quality of life, or as threats to the status quo and their well-being. Others see challenging and exciting opportunities. To them, the development of the technology is a thrilling and inspiring example of human progress.
When we speak of computers in this book, we include mobile devices such as smartphones and tablets, desktop computers and mainframes, embedded chips that control machines (from sewing machines to oil refineries), entertainment systems (such as video recorders and game machines), and the “Net,” or “cyberspace.” Cyberspace is built of computers (e.g., Web servers), communication devices (wired and wireless), and storage media, but its real meaning is the vast web of communications and information that includes the Internet and more.
In the next section, we look at some phenomena, often unplanned and spontaneous, that computer and communication technology made possible. They have deeply changed how we interact with other people, what we can accomplish, and how others can intrude into our relationships and activities. In the rest of the chapter, we introduce themes that show up often, and we present an introduction to some ethical theories that can help guide our thinking about controversies throughout the rest of the book. The next seven chapters look at ethical, social, and legal issues primarily from the perspective of any person who lives and works in a modern computerized society and is interested in the impact of the technology. The final chapter takes the perspective of someone who works as a computer professional who designs or programs computer systems or as a professional in any area who must make decisions and/or set policy about the use of computer systems. It explores the ethical responsibilities of the professional. The Software Engineering Code of Ethics and Professional Practice and the ACM Code of Ethics and Professional Conduct, in Appendix A, provide guidelines for professionals.
1.2 Change and Unexpected Developments
No one would design a bridge or a large building today without using computers, but the Brooklyn Bridge, built more than 130 years ago—long before computers—is both a work of art and a marvelous feat of engineering. The builders of the Statue of Liberty, the Pyramids, the Roman aqueducts, magnificent cathedrals, and countless other complex structures did not wait for computers. People communicated by letters and telephone before text messages, email, and Twitter. People socialized in person before social-networking sites. Yet we can identify several phenomena resulting from computer
and communication technology that are far different from what preceded them (in degree, if not entirely in kind), several areas where the impacts are dramatic, and many that were unanticipated. In this section, we consider a brief sampling of such phenomena. Some are quite recent. Some are routine parts of our lives now. The point is to remind us that a generation ago they did not exist. They illustrate the amazingly varied uses people find for new tools and technologies.
It is precisely this unique human capacity to transcend the present, to live one’s life by purposes stretching into the future—to live not at the mercy of the world, but as a builder and designer of that world—that is the distinction between human and animal behavior, or between the human being and the machine.
—Betty Friedan3
1.2.1 Connections: Cellphones, Social Networking, and More
The Web, social networking, cellphones, and other electronic devices keep us connected to other people and to information all day, virtually everywhere. We look at a few connectivity applications, focusing on fast changes and unanticipated uses and side effects (good and bad). The discussion suggests issues we study throughout the book.
Cellphones
In the 1990s, relatively few people had cellphones. Business people and sales people who often worked outside their office carried them. High-tech workers and gadget enthusiasts liked them. Others bought the phones so they could make emergency calls if their cars broke down. We were used to being out of touch when away from home or office. We planned ahead and arranged our activities so that we did not need a phone when one was not available. Within a short time, however, cell service improved and prices dropped. Cellphone makers and service providers developed new features and services, adding cameras, video, Web connections, and location detection. Apple introduced the iPhone in 2007, and phones got “smart.” People quickly developed hundreds of thousands of applications and embraced the term app. Consumers downloaded 10 billion apps from Apple’s App Store. Within very few years, people all over the world used phones, rather than PCs or laptops, as their connection to the Internet. Millions, then hundreds of millions, then billions of people started carrying mobile phones. In 2011, there were approximately five billion cellphone subscriptions worldwide—an astoundingly fast spread of a new technology. Writers describe the dramatic changes with observations such as, “A Masai warrior with a smartphone and Google has access to more information than the President did 15 years ago” and “More folks have access to a cellphone than to a toilet.”4
Cellphones became a common tool for conversations, messaging, taking pictures, downloading music, checking email, playing games, banking, managing investments, finding a restaurant, tracking friends, watching videos. Smartphones serve as electronic wallets and identification cards at store terminals or security checkpoints. Phones monitor security cameras at home or control home appliances from a distance. Professional people use smartphone apps for a myriad of business tasks. Smartphones with motion detectors remind obese teenagers to get moving. An app analyzes blood glucose levels for diabetics and reminds them when to exercise, take medication, or eat something. Military personnel on the front lines can use specialized apps to download satellite surveillance video and maps. More unanticipated uses include location tracking, sexting, life-saving medical apps, and malicious data-stealing apps. People use cellphones to organize flash mobs for street dances and pillow fights—or for attacking pedestrians and looting stores. Terrorists use cellphones to set off bombs. Apps designed for poor countries inform people when water is available and help perform medical imaging.
These examples suggest the number and variety of unanticipated applications of this one, relatively new “connection” device. The examples also suggest problems. We discuss privacy invasion by data theft and location tracking in Chapter 2. In Chapter 3, we consider whether phone service should be shut down during riots. Is the security of smartphones sufficient for banking and electronic wallets? (What if you lose your phone?) Do people realize that when they synch their phone with other devices, their files become vulnerable at the level of the weakest security?
As a side effect of cellphone use and the sophistication of smartphones, researchers are learning an enormous amount about our behavior. Laws protect the privacy of the content of our conversations, but smartphones log calls and messages and contain devices that detect location, motion, direction, light levels, and other phones nearby. Most owners carry their phones all day. Researchers analyze this trove of sensor data. (Yes, much of it can be stored.) Analysis of the data generates valuable information about traffic congestion, commuting patterns, and the spread of disease. In an example of the latter, by studying movement and communication patterns of MIT students, researchers could detect who had the flu, sometimes before the students knew it themselves. Researchers also can determine which people influence the decisions of others. Advertisers and politicians crave such information. Perhaps the eeriest result is that researchers who analyzed time and location data from millions of calls said that, with enough data, a mathematical model could predict where someone would be at a particular future time with more than 90% accuracy. Who will have access to that information?5
Rudeness is an issue with cellphones. People use them in inappropriate places, disturbing others. The fact that so many people carry small cameras everywhere (mostly in phones, but also hidden in other small objects such as pens∗) affects our privacy in public and nonpublic places.6 How well do people armed with cellphone cameras distinguish news events and evidence of crimes from voyeurism, their own rudeness, and stalking?
∗ At least one company sells a working pen that records high-resolution video.
Talking on a phone while driving a car increases the risk of an accident. Some states prohibit use of handheld phones while driving (and a lot of drivers ignore the ban). Researchers developed an app that uses motion detection by smartphones to deduce that a phone is in a moving car and block incoming calls. A more sophisticated version locates the phone well enough to block only the driver’s phone, not that of a passenger.
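The logic of such an app is simple enough to sketch. The following is a hypothetical illustration only; the sensor and telephony functions are invented placeholders, not any real phone platform's API.

```python
# Hypothetical sketch of a "block calls while driving" app. The sensor and
# telephony hooks are invented placeholders, not a real phone API.

CAR_SPEED_THRESHOLD_MPH = 15   # walking or cycling stays well below this

def should_block_calls(read_speed_mph, seat_position):
    """Decide whether to hold incoming calls.

    read_speed_mph: function returning speed estimated from GPS/motion data.
    seat_position:  function returning "driver" or "passenger", as in the
                    more sophisticated version described in the text.
    """
    if read_speed_mph() < CAR_SPEED_THRESHOLD_MPH:
        return False               # not in a moving car
    return seat_position() == "driver"

# Example with stubbed sensors:
if should_block_calls(lambda: 42.0, lambda: "driver"):
    print("Holding incoming calls until the car stops.")
```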
Here is an example of a subtle behavioral change. When people began carrying cellphones and could call for help, more headed out in the wilderness or went rock climbing without appropriate preparation. In many areas of life, people take more risk when technology increases safety. This is not unreasonable if the added risk and increased safety are in balance. When rescue calls surged, some rescue services began billing for the true cost of a rescue—one way to remind people to properly weigh the risk.
Kill switches
Soon after Amazon began selling electronic books for its Kindle ebook readers, the company discovered that a publisher was selling books in Amazon’s online store that it did not have the legal rights to sell in the United States. Amazon deleted the books from its store and from the Kindles of people who had bought them; it refunded their payments. A reasonable and appropriate response? Not to many customers and media observers. Customers were outraged that Amazon deleted books from their Kindles. People were startled to learn that Amazon could do so.∗ The response was so strong that Amazon announced that it would not remove books from customer Kindles again. Few realized at that time that Apple’s iPhones already had a kill switch—a way for Apple to remotely delete apps from phones. In 2011, when a software developer discovered malicious code in an app for Android phones, Google quickly removed the app from its store and from more than 250,000 phones. Although this was a good example of the purpose of a kill switch and a beneficial use, the fact that Google could do it disturbed people. One of the troubling side effects of our connectivity is that outsiders can reach into our devices and delete our stuff.
∗ Ironically, one of the books Amazon removed was George Orwell’s 1984—a novel about a totalitarian government that regularly sent documents down a “memory hole” to destroy them.
Perhaps this extended reach should not have been a surprise. In many businesses, the IT department has access to all desktop computers and can install—or delete— software. Software on personal computers and other electronic devices communicates with businesses and organizations regularly, without our direct command, to check for updates of software, news, and our friends’ activities. When we enable updates of software, a company remotely deletes old versions.
Now, the operating systems for smartphones, tablets, and some computers (e.g., Windows) have kill switches. The companies do not disclose much information about them. The main purpose is security—to remove malicious software that the company discovers in an app after users have downloaded it. Indeed, companies such as Google and Apple that provide popular app stores see it as a serious responsibility to protect users from malicious apps. Some companies tell us about their removal capability in their terms of use agreements, but such agreements can run to thousands of words and have vague, general statements. Few people read them.
What are some potential uses and risks? Kill switches could remove content that infringes copyrights. They could remove content that a company or government deems offensive. What if malicious hackers found a way to operate the kill switches on our devices? Governments in many countries have extensive censorship laws and require that communications services provide government access to communications. Governments, in free and unfree countries, pressure businesses to act as the government prefers. For more than 2000 years, governments and religious and social organizations have burned books that displeased them. What pressures might governments put on companies to use the kill switches? Will the impact of electronic kill switches be more devastating than attempts to prohibit printed material? Or will companies use them carefully for improved security? Our new tools are remarkably powerful and remarkably vulnerable.
Social networking
While all this razzle-dazzle connects us electronically, it disconnects us from each other, having us “interfacing” more with computers and TV screens than looking in the face of our fellow human beings. Is this progress?
—Jim Hightower, radio commentator, 19957
Facebook, one of the first of the social networking sites, started at Harvard as an online version of the hardcopy student directories available at many colleges. At first, the sites were wildly popular with young people, while older people did not understand the appeal or worried about safety and privacy. Adults quickly discovered benefits of personal and business social networking. Social networks are enormously popular with hundreds of millions of people because of the ease with which they can share so many aspects of their lives and activities with family, friends, co-workers, and the public.
As with so many other digital phenomena, people found unanticipated uses of social networking, some good, some bad. Friends and ex-boyfriends and ex-girlfriends post pranks and embarrassing material. Stalkers and bullies stalk and bully. Politicians, advertisers, businesses, and organizations seek donations, volunteers, customers, and connections. Protesters organize demonstrations and revolutions. Jurors tweet about court cases during trials (causing mistrials, overturned convictions, and jail time for offending jurors). Social networking brought us more threats to privacy and a steady stream of updates on the trivial details of people’s lives. Gradually, social network companies developed sophisticated privacy controls and feedback systems to reduce problems, though they certainly have not eliminated them. (Privacy issues for social networks: Section 2.3.2.) Overall, to most people, the benefits outweigh the problems, and social networking has become the new way of communicating.
In a phenomenon called “crowd funding,” social networks, Twitter, and other platforms make it easy to raise money in small amounts from a large number of people for charities, political causes, artistic projects, and investment in start-up companies.
How do social networking sites affect people and relationships? People can have hundreds of friends and contacts, but have they traded quality of in-person relationships for quantity of superficial digital relationships? Does the time spent online reduce the time spent on physical activity and staying healthy? It is still too early for definitive answers, but it appears that the many critics who anticipated a serious problem of social isolation were mistaken. Researchers find that people use social networks mostly to keep in touch with friends and family and that the easy, frequent contact enhances relationships, empathy, and a sense of community. On the other hand, young people who spend a lot of time on a social network do poorly in school and have behavioral problems. (Are these people who would have problems in any case? Does the access to the networks exacerbate preexisting emotional problems?)
Just as researchers study social phenomena using the masses of data that smartphone systems collect, they also mine the masses of data in social networks. For example, social scientists and computer scientists analyze billions of connections to find patterns that could help identify terrorist groups.8
A person you follow in social media might not be a person at all. A socialbot is an artificial intelligence program that simulates a human being in social media. (More about artificial intelligence: Section 1.2.3.) Researchers tricked Twitter users into building relationships with artificial tweeting personalities, some of which gained large followings. Political activists launched socialbots to influence voters and legislators. The U.S. military raised concerns about automated disinformation campaigns by enemies. Advertising bots are likely to be common. When the Internet was new, someone commented (and many repeated) that “on the Internet, no one knows you’re a dog.” It meant that we could develop relationships with others based on common interests without knowing or caring about age, race, nationality, gender, or physical attractiveness. Some of those others might not even be people, and we might not know it. Should we be comfortable with that?
Communication and the Web
Email and the Web are so much a part of our culture now that we might forget how new and extraordinary they are. Email was first used mostly by computer scientists. In the 1980s, messages were short and contained only text. As more people and businesses connected to computer networks, use of email expanded to science researchers, then to
businesses, then to millions of other people. Limits on length disappeared, and we began attaching digitized photos and documents. People worldwide still send several billion emails daily (not counting spam), although texting, tweeting, and other social media have replaced email as the favored communication method in many contexts.9
High-energy physicists established the World Wide Web in Europe in 1990 to share their work with colleagues and researchers in other countries. In the mid- and late 1990s, with the development of Web browsers and search engines, the Web became an environment for ordinary users and for electronic commerce. Today there are billions of Web pages. The Web has grown from an idea to a huge library and news source, a huge shopping mall, an entertainment center, and a multimedia, global forum in less than one generation.
The Web gives us access to information and access to audiences unimaginable a generation ago. It empowers ordinary people to make better decisions about everything from selecting a bicycle to selecting medical treatments. It empowers us to do things that we used to rely on experts to do for us. Software tools, many available for free, help us analyze the healthiness of our diet or plan a budget. We can find references and forms for legal processes. We can read frank reviews of cameras, clothing, cars, books, and other products written by other buyers, not marketing departments. We can select our entertainment and watch it when we want to. We can fight back against powerful institutions by shaming them with videos that go viral∗ (see, for example, “United Breaks Guitars” on YouTube) or by posting legal documents intended to intimidate us (see, for example, chillingeffects.org). Businesses and organizations use “viral marketing”—that is, relying on large numbers of people to view and spread marketing messages in clever videos. We can start our own Web-based television network without the huge investment and government license requirements of broadcast television networks. A college student with a good idea and some well-implemented software can start a business that quickly grows to be worth millions or billions of dollars; several have. The openness of the Internet enables “innovation without permission,” in the words of Vinton Cerf, one of the key people who has worked on Internet development since it began.10
∗ “Going viral” describes the phenomenon where something posted in cyberspace catches the attention of people who view, copy, and spread it (or links to it) to millions more people.
Blogs (a word made up from “Web log”) and videos are two examples of the many new forms of creativity that flourish because Web technology and special software make them so easy and inexpensive. They began as outlets for amateurs and now are significant sources of news and entertainment. They have created new paths for jobs—with news media, publishers, and advertising and entertainment companies. Of course, some amateur blogs and videos are dull, silly, and poorly written or made, but many are gems, and people find them. People blog on current events, celebrity gossip, hobbies, books, movies, dieting, law, economics, technology, political candidates, Internet issues, and virtually any other topic. They provide varied, sometimes quirky perspectives. The independence of bloggers attracts readers; it suggests a genuine connection with what ordinary people are thinking and doing, not filtered through major news companies or governments. Businesses were quick to recognize the value of blogs, and many provide their own as part of their public relations and marketing programs. Inexpensive video cameras and video-manipulation tools have powered a burst of short amateur videos—often humorous, sometimes worthless, and sometimes quite serious. We can see a soldier’s view of war, someone’s encounter with aggressive whales, an arrest by police. Video sites also made it easy to post and trade professional videos, infringing copyrights owned by entertainment companies and individuals. We explore copyright issues in Chapter 4.
“I’ve got pressure”
When asked by a young man to speak more quietly on his cellphone, a Hong Kong bus rider berated the man for nearly six minutes with angry insults and obscenities. In the past, a few other riders might have described the incident to friends, then soon forgotten it. But in this instance, another rider captured the scene on his cellphone. The video soon appeared on the Internet, and millions of people saw it. People provided subtitles in different languages, set the video to music, used clips as mobile-phone ringtones, and produced t-shirts with pictures and quotes. “I’ve got pressure” and other phrases from the rant slipped into conversations.
This incident reminds us that anything we do in a public place can be captured and preserved on video. But more, it illustrates how the Internet facilitates and encourages creativity and the quick creation and distribution of culture artifacts and entertainment, with the contribution of ideas, modifications, variations, improvements, and new works from thousands of people.
The Web connects students and teachers. At first, universities offered online courses within their area, benefitting people who work full-time, who have varying work schedules that conflict with normal class schedules, who have small children at home, or who cannot travel easily because of disabilities. Gradually a potential to revolutionize advanced education became clear.∗ More than 100 million people have viewed the thousands of free lessons on sciences, economics, and other subjects at the online Khan Academy. When two artificial intelligence experts offered a Stanford University graduate course for free online, they expected 500–1000 students to sign up. They got 160,000 people from around the world, and more than 20,000 completed the course, which included automatically graded homework assignments and exams.11
∗ For elementary education, it appears that regular classes and in-person teachers still have the advantage.
The impact of the connections provided by the Web and cellphones is more dramatic in remote or less developed areas of the world, many of which do not have landline telephones. Mountains and thick jungle, with no roads, separate villagers in one town in Malaysia from the next, but the villagers order supplies, check the market price of rice to get a good deal when selling their crop, and email family photos to distant relatives. Farmers in Africa get weather forecasts and instruction in improved farming methods. An Inuit man operates an Internet service provider for a village in the Northwest Territories of Canada, where temperatures drop to −40°F. Villagers in Nepal sell handicrafts worldwide via a website based in Seattle. Sales have boomed, more villagers have regular work, dying local arts are reviving, and some villagers can now afford to send their children to school.
Telemedicine
Telemedicine, or long-distance medicine, refers to remote performance of medical exams, analyses, and procedures using specialized equipment and computer networks. On long airplane flights, telemedicine can help treat a sick passenger and ascertain whether the plane needs to make an emergency landing. Prisons use telemedicine to reduce the risk of escape by dangerous criminals. Some small-town hospitals use video systems to consult with specialists at large medical centers—eliminating the expense, time, and possible health risk of transporting the patient to the medical center. A variety of health-monitoring devices send their readings from a patient’s home to a nurse over the Internet. This technology eliminates the expense, time, and inconvenience of more frequent visits, while enabling more regular monitoring of patients and helping to catch dangerous conditions early.
Telemedicine goes well beyond transmission of information. Surgeons in New York used video, robotic devices, and high-speed communication links to remotely remove a gall bladder from a patient in France. Such systems can save lives in emergencies and bring a high level of surgical skills to small communities that have no surgeons.
The Web abounds with examples of collaborative projects, some organized, such as Wikipedia∗ (the online encyclopedia written by volunteers), some spontaneous. Scientists collaborate on research with scientists in other countries much more easily and more often than they could without the Internet. Informal communities of programmers, scattered around the world, create and maintain free software. Informal, decentralized groups of people help investigate online auction fraud, a murder, stolen research, and other crimes. People who have never met collaborate on creating entertainment.
∗ A wiki is a website, supported by special software, that allows people to add content and edit content that others provide. Wikis are tools for collaborative projects within a business or organization or among the public.
Some collaborative projects can have dangerous results. To reduce the flow of illegal immigrants, a governor of Texas proposed setting up night-vision webcams along the Mexican border that volunteers would monitor on the Internet. Will the people monitoring a border webcam go out and attack those they see coming across the border? What training or selection process is appropriate for volunteers who monitor these security cameras? In China, a man posted the online name of another man he believed was having an affair with his wife. Thousands of people participated in tracking down the man’s real name and address and encouraging public action against him. Thousands of Twitterers in Saudi Arabia called for the execution of a young writer who they believed insulted the Prophet Muhammad. Mobs and individuals emotionally involved in a political, religious, or moral cause do not always pause for the details of due process. They do not carefully determine whether they identified the correct person, whether the person is guilty of a crime, and what the appropriate punishment is. On the other hand, police departments in cities in several countries effectively use instant messaging to alert residents who help find crime suspects or stolen cars in their neighborhoods. Enlisting volunteers is a useful new collaborative tool for crime fighting and possibly antiterrorism programs. How can we guide the efforts of thousands of individuals toward useful ends while protecting against mistakes, instant vigilantism, and other abuses?
1.2.2 E-commerce and Free Stuff
In the 1990s, the idea of commercial websites horrified Web users. The Web, they believed, was for research, information, and online communities. A few brick-and-mortar businesses and a few young entrepreneurs recognized the potential and benefits of online commerce. Among the earliest traditional businesses on the Web, United Parcel Service and Federal Express let customers check the status of packages they sent. This was both a novelty and a helpful service. Amazon.com, founded in 1994, started selling books on the Web and became one of the most popular, reliable, and user-friendly commercial sites. Many, many Web-based businesses followed Amazon, creating new business models—such as eBay with its online auctions. Traditional businesses established websites. Online sales in the United States now total hundreds of billions of dollars a year. The Web changed from a mostly academic community to a world market in little more than a decade.
Some of the benefits of e-commerce are fairly obvious: we can consider more products and sellers, some far away, in less time and without burning gasoline. Some benefits are less obvious or were not obvious before they appeared. Auction sites gave people access to customers they could not have found efficiently before. The lower overhead and the ease of comparison shopping on the Web brought down prices of a variety of products. Consumers save 10–40%, for example, by buying contact lenses online, according to a Progressive Policy Institute report. Consumers who do price-comparison research on the Web before buying a new car typically save about $400.12 Small businesses and individual artists sell on the Web without paying big fees to middlemen and distributors. The Web enabled a peer-to-peer economy with websites where ordinary people sell or trade their skills, make small loans, and trade their homes for vacations.
Growth of commerce on the Web required solutions to several problems. One was trust. People were reluctant to give their credit card numbers on the Web to companies they had not dealt with or even heard of before. Enter PayPal, a company built on the idea of having a trusted intermediary handle payments. Encryption and secure servers also made payments safer.∗ The Better Business Bureau established a website where we can find out if consumers have complained about a company. Auction sites implemented rating and comment systems to help buyers and sellers determine whom to trust. (Impacts of e-commerce on free speech: Section 3.2.5.) Email confirmations of orders, consumer-friendly return policies, and easy packaging for returns all contributed to consumer comfort and more online sales. The University of Michigan’s National Quality Research Center found that e-commerce businesses had a higher customer-satisfaction rating than any other sector of the economy. As online sales increased, competition led traditional stores to adopt some of the practices of e-commerce, such as consumer-friendly return policies.
∗ The ease and security of payment on the Web had a pleasant side effect: Many people contribute more to charitable organizations. That had the unpleasant side effect of spawning scam charity sites.
Free stuff
Libraries have provided free access to books, newspapers, and journals for generations, and radio and television provided free news and entertainment before the Internet. But there is so much more free stuff now—a truly astounding amount—conveniently available on the Web.
For our computers, we can get free email programs and email accounts, browsers, filters, firewalls, encryption software, word processors, spreadsheets, software for viewing documents, software to manipulate photos and video, home inventory software, antispam software, antivirus software, antispyware software, and software for many other specialized purposes. This is a small sampling of software available for free.
We can find free game-playing programs for old board games and card games such as chess and bridge, as well as for new games. Phone service via Skype is free. There are free dating services on the Web. Major music festivals offer their concerts for free on the Internet, a nice alternative to paying $30 to $500 for a ticket. Craigslist, the classified ad site, one of the most popular websites in the world, is free to people who place ads and people who read them. Major (expensive) universities such as Stanford, Yale, and MIT provide video of lectures, lecture notes, and exams for thousands of their courses on the Web for free. We can download whole books from Google, Project Gutenberg, and other sources for free.† We can read news from all over the world for free. We can store our personal photographs, videos, and other files online for free. MySpace, Facebook, Twitter, LinkedIn, and YouTube are free; Google, Bing, and Yahoo are free. Specialized, scholarly encyclopedias (e.g., the Stanford Encyclopedia of Philosophy), Wikipedia, and hundreds of other references are free.
We pay for libraries with taxes. Advertisers pay for broadcasting radio and television programs. On the Web, advertising pays for many, many free sites and services, but far from all. Wikipedia carries no advertising—donations pay for its hardware and bandwidth. Craigslist charges fees of some businesses that post job announcements and brokers who post apartment listings in a few cities. That keeps the site free to everyone else and free of other paid ads. Businesses provide some free information and services for good public relations and as a marketing tool. (Some free programs and services do not have all the features of the paid versions.) Nonprofit organizations provide information as a public service; donations or grants fund them. One of the distinct and delightful features of the Internet is that individuals provide a huge amount of free stuff simply because it pleases them to do so. They are professionals or hobbyists or just ordinary people who enjoy sharing their expertise and enthusiasm. Generosity and public service flourish in the Web environment.
† Books available for free downloading are in the public domain (that is, out of copyright).
It is often obvious when we are viewing advertisements on websites or phones. Ads annoy some people, but they are not insidious, and their presence on a screen is not an unreasonable price to pay for free services. However, to earn ad revenue to fund multimillion-dollar services, many free sites collect information about our online activities and sell it to advertisers. This tracking is often not obvious; we consider it in Chapter 2.
1.2.3 Artificial Intelligence, Robotics, Sensors, and Motion
Artificial intelligence
Artificial intelligence (AI) is a branch of computer science that makes computers perform tasks we normally (or used to) think of as requiring human intelligence. It includes playing complex strategy games such as chess, language translation, making decisions based on large amounts of data (such as approving loan applications), and understanding speech (where the appropriateness of the response might be the measure of “understanding”). AI also includes tasks performed automatically by the human brain and nervous system—for example, vision (the capture and interpretation of images by cameras and software). Learning is a characteristic of many AI programs. That is, the output of the program improves over time as it “learns” by evaluating results of its decisions on the inputs it encounters. Many AI applications involve pattern recognition, that is, recognizing similarities among different things. Applications include reading handwriting (for automatic sorting of mail and input on tablet computers, for example), matching fingerprints, and matching faces in photos.
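As a concrete, if toy, illustration of learning in this sense, the sketch below implements the perceptron, one of the oldest pattern-recognition algorithms. It is not drawn from this book; it simply shows a program whose decisions improve as it evaluates the results of earlier decisions on its inputs.

```python
# Toy perceptron: learns to separate two classes of points by adjusting
# its weights whenever it makes a mistake. Illustrative only.

def train_perceptron(samples, passes=20, learning_rate=0.1):
    """samples: list of ((x1, x2), label) pairs with label +1 or -1."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(passes):
        for (x1, x2), label in samples:
            prediction = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else -1
            if prediction != label:            # wrong decision: adjust
                w[0] += learning_rate * label * x1
                w[1] += learning_rate * label * x2
                bias += learning_rate * label
    return w, bias

# Points above the line y = x are labeled +1, points below are labeled -1.
data = [((0, 1), 1), ((1, 2), 1), ((2, 3), 1),
        ((1, 0), -1), ((2, 1), -1), ((3, 2), -1)]
weights, bias = train_perceptron(data)
print(weights, bias)   # a separating line learned from examples
```

Modern systems replace this rule with far more elaborate methods, but the pattern is the same: make a decision, check the result, and adjust.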
Early in the development of AI, researchers thought the hard problems for computers were tasks that required high intelligence and advanced training for humans, such as winning at chess and doing mathematical proofs. In 1997, IBM’s chess computer, Deep Blue, beat World Champion Garry Kasparov in a tournament. AI researchers realized that narrow, specialized skills were easier for computers than what a five-year-old does: recognize people, carry on a conversation, respond intelligently to the environment. In 2011, another specially designed computer system called Watson (also built by IBM) defeated human Jeopardy! champions by answering questions more quickly than the humans. Watson processes language (including puns, analogies, and so on) and general
knowledge. It searches and analyzes 200 million pages of information in less than three seconds. Practical applications of the Watson technology include medical diagnosis and various business decision-making applications.
We briefly describe a few more examples of AI applications. They were astonishing advances not long ago.
When a man had a heart attack in a swimming pool in Germany, lifeguards did not see him sink to the bottom of the pool. An underwater surveillance system, using cameras and sophisticated software, detected him and alerted the lifeguards who rescued him. The software distinguishes a swimmer in distress from normal swimming, shadows, and reflections. It is now installed in many large pools in Europe and the United States. Just as AI software can distinguish a swimmer in trouble from other swimmers, AI software in video surveillance systems can distinguish suspicious behavior by a customer in a store that might indicate shoplifting or other crimes. Thus, without constant human monitoring, the AI-equipped video system can help prevent a crime, rather than simply identify the culprits afterwards.
Search engines use AI techniques to select search results. They figure out what the user meant if the search phrase contains typos, and they use context to determine the intended meaning of words that have multiple meanings. Automated websites that answer questions use AI to figure out what a question means and find answers.
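One classic ingredient of typo correction is edit distance: suggest the known word that can be reached from the misspelled one with the fewest single-character changes. The sketch below illustrates only that one idea; real search engines combine it with query logs, context, and many other signals.

```python
# Simplified spelling correction by edit (Levenshtein) distance.
# Real search engines use far richer models; this shows one basic idea.

def edit_distance(a, b):
    """Minimum insertions, deletions, and substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete from a
                            curr[j - 1] + 1,             # insert into a
                            prev[j - 1] + (ca != cb)))   # substitute
        prev = curr
    return prev[-1]

def suggest(query_word, vocabulary):
    """Return the vocabulary word closest to the (possibly misspelled) query."""
    return min(vocabulary, key=lambda w: edit_distance(query_word, w))

vocab = ["privacy", "security", "surveillance", "computer"]
print(suggest("survelliance", vocab))   # -> "surveillance"
```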
Speech recognition, once a difficult research area, is now a common tool for hundreds of applications. Computer programs that teach foreign languages give instruction in correct pronunciation if they do not recognize what the user says. Millions of people who carry Apple smartphones can ask questions of Siri, Apple’s “intelligent” personal assistant. Siri interprets our questions and searches the Web for answers. Air traffic controllers train in a mockup tower whose “windows” are computer screens. The trainee directs simulated air traffic. The computer system responds when the trainee speaks to the simulated pilots. Such simulation allows more intensive training in a safe environment. If the trainee mistakenly directs two airplanes to land on the same runway at the same time, no one gets hurt.
People continue to debate the philosophical nature and social implications of artificial intelligence. What does it mean for a computer system to be intelligent? Alan Turing, who developed fundamental concepts underlying computer science before there were computers, proposed a test, now called the Turing Test, for human-level intelligence. Let a person converse (over a network) with the system on any topics the person chooses. If the computer convinces the person that it is human, the computer passes the test. Is that enough? Many technologists think so (assuming the actual test is well designed). But is the computer intelligent? Philosopher John Searle argues that computers are not and cannot be intelligent. They do not think; they manipulate symbols. They do so at very high speed, and they can store (or access) and manipulate a huge quantity of data, but they are not conscious. They do not understand; they simulate understanding. Searle uses the following example to illustrate the difference: Suppose you do not know the
Chinese language. You are in a room with lots of boxes of Chinese symbols and a large instruction book written in English. People submit to you sequences of Chinese symbols. The instructions tell you how to manipulate the symbols you are given and the ones in the boxes to produce a new sequence of symbols to give back. You are very careful, and you do not get bored; you follow the instructions in the book exactly. Unknown to you, the sequences you receive are questions in Chinese. The sequences that you give back by following the instructions (just as a computer follows the instructions of a program) are the correct answers in Chinese. Everyone outside the room thinks you understand Chinese very well. Do you? Searle might say that although Watson won at Jeopardy! , Watson does not know it won.13
Whether we characterize machines as intelligent, or use the word metaphorically, or say that machines simulate intelligence, advances in AI are continuing at a very fast pace. It took IBM several years and millions of dollars to build Watson.14 Technologist Ray Kurzweil thinks personal computers will have the power of Watson within 10 years.
The goal of 17th- and 18th-century calculators was modest: to automate basic arithmetic operations. It shocked people at the time. That a mindless machine could perform tasks associated with human intellectual abilities was disconcerting. Centuries later, Garry Kasparov’s loss to a computer chess program generated worried articles about the value—or loss of value—of human intelligence. Watson generated more. So far, it seems that each new AI breakthrough is met with concern and fear at first. A few years later, we take it for granted. (Implications of human-level AI: Section 7.4.3.) How will we react when Jeopardy! is oh, so trivial that anyone can do well at it? How will we react when we can go into a hospital for surgery performed entirely by a machine? Will it be scarier than riding in the first automatic elevators or airplanes? How will we react when we can have a conversation over the Net about any topic at all—and not know if we are conversing with a human or a machine? How will we react when chips implanted in our brains enhance our memory with gigabytes of data and a search engine? Will we still be human?
Robots
Robots are mechanical devices that perform physical tasks traditionally done by humans or tasks that we think of as human-like activities. Robotic machines have been assembling products in factories for decades. They work faster and more accurately than people can. Computer software with artificial intelligence controls most robotic devices now. Robotic milking machines milk hundreds of thousands of cows at dairy farms while the farmhands sleep or do other chores. Some robots dance, and some make facial expressions to convey emotions. However, just as general intelligence is a hard problem for AI, general movement and functioning is a hard problem for robots. Most robotic devices are special-purpose devices with a relatively limited set of operations.
McDonald’s and other fast-food sellers use robotic food preparation systems to reduce costs and speed service. A robot pharmacist machine, connected to a patient database,
plucks the appropriate medications from pharmacy shelves by reading bar codes, checks for drug interactions, and handles billing. One of its main goals is reduction of human error. Robots deliver medications and carry linens in hospitals. They navigate around obstacles and “push” elevator buttons with wireless signals. Physicians do complex and delicate surgery from a console with a 3-D monitor and joysticks that control robotic instruments. The software filters out a physician’s shaky movements. Robots work in environments that are hazardous to people. They inspect undersea structures and communication cables. They search for survivors in buildings collapsed by bombs or earthquakes. They explore volcanoes and other planets. They move or process nuclear and other hazardous wastes.
For several years, Sony sold a robot pet dog, Aibo. It walked (with a camera system providing vision). It responded to commands, and it learned. Several companies make robots with a more-or-less human shape. Honda’s Asimo, for example, walks up and down stairs. Various companies and researchers are developing robots with more general abilities. One goal is to develop robots that can act intelligently and perform a variety of operations to assist people. Robots (doglike or humanlike) can serve as companions to elderly people. Is an emotional connection with a machine dehumanizing, or is it an improvement over living alone or in a nursing home where the staff cannot provide regular companionship? Will knowing that Grandma has a robot companion ease the guilt of family members and lead them to visit less often? Will we come to view robot companions as positively as pets?
Smart sensors, motion, and control
How do robots walk, climb stairs, and dance? Tiny motion-sensing and gravity-sensing devices collect status data. Complex software interprets the data and determines the necessary motions, and then sends signals to motors. These devices—accelerometers, or mems (for microelectromechanical systems)—help robots, and Segway’s motorized scooters, stay upright.
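The sketch below gives a concrete, much-simplified picture of such a sense-estimate-act loop, written in Python. It is only an illustration: the sensor and motor functions (read_accelerometer, read_gyro, set_motor_torque), the gain values, and the update rate are hypothetical placeholders, not the code of any actual robot or scooter.

# Minimal sketch of a self-balancing control loop, in the spirit of the
# MEMS-based balancing described above. The hardware functions passed in
# (read_accelerometer, read_gyro, set_motor_torque) are hypothetical
# placeholders for whatever API a real robot would provide.
import math
import time

DT = 0.01          # control period: 100 updates per second
KP, KD = 35.0, 1.2 # proportional and derivative gains (tuned per robot)
ALPHA = 0.98       # complementary-filter weight given to the gyro estimate

def balance_loop(read_accelerometer, read_gyro, set_motor_torque):
    """Keep an inverted-pendulum robot upright around a zero-tilt setpoint."""
    tilt = 0.0  # estimated tilt angle in radians
    while True:  # runs forever in this sketch
        ax, _, az = read_accelerometer()   # gravity direction (m/s^2)
        gyro_rate = read_gyro()            # tilt rate (rad/s)

        # Fuse the two sensors: the gyro is smooth but drifts,
        # the accelerometer is noisy but drift-free.
        accel_tilt = math.atan2(ax, az)
        tilt = ALPHA * (tilt + gyro_rate * DT) + (1 - ALPHA) * accel_tilt

        # PD control: push back in proportion to the tilt and the tilt rate.
        torque = -(KP * tilt + KD * gyro_rate)
        set_motor_torque(torque)
        time.sleep(DT)

The pattern is the common one: fuse noisy sensor readings into an estimate of the device’s state, then command the motors to push the state back toward “upright,” a hundred or more times per second.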
A sharp price drop for mems triggered a burst of applications.15 They provide image stabilization in digital cameras. They detect when a car has crashed, when someone has dropped a laptop, or when an elderly person has fallen. (In those applications, the system deploys an airbag, triggers a lock on the disk drive to reduce damage, or calls for help.) The Wii game console, whose controller detects the user’s motion, and motion detectors in smartphones brought motion-sensing applications to millions of consumers.
Tiny microprocessors with sensors and radio transmitters (sometimes called smart dust, though they are still larger than dust particles) are finding all sorts of applications. Some are in use; some are in development. We mention a few examples. These examples have many obvious benefits. What are some potential problems?
Oil refineries and fuel storage systems use thousands of sensors to detect leaks and other malfunctions. Sandia National Laboratory developed a “chemical lab on a chip” that can detect emissions from automobiles, chemical leaks, dangerous gases in fires (reducing
risk for firefighters), and many other hazards. Similar chips could detect chemical warfare agents.
Sensors detect temperature, acceleration, and stress in materials (such as airplane parts). Sensors distributed throughout buildings and bridges can detect structural problems, report on damage from earthquakes, and so on. These applications increase safety while reducing maintenance costs.
Sensors in agricultural fields report on moisture, acidity, and so on, helping farmers to avoid waste and to use no more fertilizer than needed. Sensors could detect molds or insects that might destroy crops. Sensors implanted in chickens monitor the birds’ body temperature. A computer automatically reduces the temperature in the chicken coop if the birds get too hot, thus reducing disease and death from overheating. Sensors in food products monitor temperature, humidity, and other factors to detect potential health problems while the food is in transit to stores.
What will be the impact of tiny flying sensor/computers that communicate wirelessly and which the military can deploy to monitor movement of equipment and people, or with which police or criminals can spy on us in our homes and public places?
A Microsoft researcher developed a system with which a user manipulates 3-D images with hand movements, without touching a screen or any controls. Designers of buildings, machines, clothing, and so on, could use it to examine designs before implementing them. Someone with dirty (or sterile) hands (e.g., mechanics, cooks, surgeons) could examine reference materials while working. What other applications will people think of?
Sensors in baby clothes detect when a baby is sleeping face down, at risk for Sudden Infant Death Syndrome, and warn parents on their cellphone. A heart monitor in a firefighter’s shirt alerts supervisors if the firefighter is too stressed and needs a break. Trainers plan to use sensors in special clothing to better train athletes. What other applications will we find for wearware?
Already we implant or attach microprocessor-controlled devices in or on human bodies: heart pacemakers and defibrillators and devices that restore motion to paralyzed people (which we describe in Section 1.2.4). These will likely see modifications that enhance performance for healthy people. At first it might be physical performance for athletes—for example, to help a competitive swimmer swim more smoothly. Then what? Biological sciences and computer sciences will combine in new ways.
1.2.4 Tools for Disabled People
One of the most heartwarming applications of computer technology is the restoration of abilities, productivity, and independence to people with physical disabilities.
Some computer-based devices assist disabled people in using ordinary computer applications that other people use, such as Web browsers and word processors. Some enable disabled people to control household and workplace appliances that most of us operate by hand. Some improve mobility. Some technologies that are primarily conveniences
for most of us provide significantly more benefit for disabled people: consider that text messaging was very popular among deaf people before it was popular with the general population.
For people who are blind, computers equipped with speech synthesizers read aloud what a sighted person sees on the screen. They read information embedded in Web pages that sighted visitors do not need, for example, descriptions of images. Google offers search tools that rank websites based on how accessible they are for blind users. For materials not in electronic form, a scanner or camera, optical-character-recognition software, and a speech synthesizer combine to read aloud to a blind person. The first such readers were large machines. Now, handheld versions can read menus, bills, and receipts in restaurants, as well as magazines and mail at home. Where noise is a problem (or for a person both blind and deaf), a grid of buttons raised and lowered by the computer to form Braille characters can replace speech output. Braille printers provide hard copy. (Books have long been available in Braille or on tape, but the expense of production for a small market kept the selection limited.) Systems similar to navigation systems in cars help blind people walk around and find their way in unfamiliar neighborhoods.
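A reading machine of the kind described above is, at heart, a three-stage pipeline. The sketch below shows that structure in Python; the stage functions (capture_image, recognize_text, synthesize_speech, braille_output) are hypothetical stand-ins for whatever camera, OCR, and speech or Braille hardware a real device would use.

# Minimal sketch of the reading-machine pipeline described above:
# photograph a printed page, recognize the text, then read it aloud
# (or send it to a Braille display where speech is impractical).
def read_aloud(capture_image, recognize_text, synthesize_speech,
               braille_output=None):
    """Run one pass of the capture -> OCR -> output pipeline."""
    image = capture_image()          # photograph the page, menu, or bill
    text = recognize_text(image)     # optical character recognition
    if braille_output is not None:
        braille_output(text)         # refreshable Braille display
    else:
        synthesize_speech(text)      # text-to-speech for the user
    return text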
Prosthetic devices, such as artificial arms and legs, have improved from heavy, “dumb” wood, to lighter materials with analog motors, and now to highly sensitive and flexible digitally controlled devices that enable amputees to participate in sports and fly airplanes. A person whose leg was amputated above the knee can walk, sit, and climb stairs with an artificial “smart” knee. Sensors attached to the natural leg measure pressure and motion more than a thousand times a second and transmit the data to a processor in the prosthetic leg. Artificial intelligence software recognizes and adapts to changes in speed and slope and the person’s walking style. The processor controls motors to bend and straighten the knee and support the body’s movement, replacing the normal complex interplay of nerves, muscles, tendons, and ligaments. Artificial arms use electrodes to pick up tiny electrical fields generated by contractions of muscles in the upper (natural) limb. Microprocessors control tiny motors that move the artificial limb, open and close fingers, and so on. For people with paralyzed legs or for others who cannot use an artificial leg, there are wheelchairs that climb stairs and support and transport a person in an upright position. In 2012, Ekso Bionics sold its first exoskeleton, a device with sensors and tiny motors that straps to a person with paralyzed legs and enables the person to walk.16
Various conditions—loss of limbs, quadriplegia (paralysis in both arms and legs, often resulting from an accident), and certain diseases—eliminate all or almost all use of the hands. Speech recognition systems are an extremely valuable tool for these people and for others. (Deaf people can use speech-recognition systems to “hear” another speaker as the computer displays the spoken words on a screen.) People who cannot use their hands can dictate documents to a word processor and give commands to a computer to control household appliances.
To restore control and motion to people paralyzed by spinal injuries, researchers are experimenting with chips that convert brain signals to controls for leg and arm muscles.
Researchers in the United States and Europe are developing brain–computer interfaces so that severely handicapped people can operate a computer and control appliances with their thoughts.17
The impact of all these devices on the morale of the user is immense. Think about a person with an active mind, personality, and sense of humor—but who cannot write, type, or speak. Imagine the difference when the person gains the ability to communicate—with family and friends, and with all the people and resources available on the Internet.
1.3 Themes
Several themes and approaches to analysis of issues appear throughout this book. I introduce a few here.
Old problems in a new context
Cyberspace has many of the problems, annoyances, and controversies of noncyber life, among them crime, pornography, violent fiction and games, advertising, copyright infringement, gambling, and products that do not work right.
Throughout this book, I often draw analogies from other technologies and other aspects of life. Sometimes we can find a helpful perspective for analysis and even ideas for solutions to new problems by looking at older technologies and established legal and social principles. The emphasis on the fact that similar problems occur in other areas is not meant to excuse the new problems. It suggests, however, that the root is not always the new technology but can be human nature, ethics, politics, or other factors. We will often try to analyze how the technology changes the context and the impact of old problems.
Adapting to new technology
Changes in technology usually require adaptive changes in laws, social institutions, business policies, and personal skills, attitudes, and behavior.
When cellphones first came with built-in cameras, privacy laws in Pennsylvania (and elsewhere) were not sufficient to convict a man who used his cellphone to take a photo up a woman’s skirt. (The man was found guilty of disorderly conduct.) A federal regulation requiring medical x-rays on film, rather than digital formats, was still in effect in 2011. During Japanese election campaigns in 2005, candidates were afraid to use email and blogs and to update their websites to communicate with voters, because a 1955 law that specifies the legal means of communicating with voters does not, of course, include these methods. It allows postcards and pamphlets.
We might naturally think some actions are criminal, and some should be legal, but legislators did not consider them when writing existing laws. The legal status of an action might be the opposite of what we expect, or it might be uncertain. Many new activities
that new technology makes possible are so different from prior ways of doing things that we need a new set of “rules of the game.”
We have to relearn standards for deciding when to trust what we read. The major impact of computer technology on privacy means we have to think in new ways about how to protect ourselves. We have to decide when privacy is important and when we are willing to put it at risk for some other benefit.
Varied sources of solutions to problems
Solutions for problems that result from new technology come from more or improved technology, the market, management policies, education and public awareness, volunteer efforts, and law.
The cycle of problems and solutions, more problems and more solutions, is a natural part of change and of life in general. Throughout this book, when we consider problems, we consider solutions from several categories. Technical solutions include hardware and software. “Hardware” might mean something other than part of a computer system; improved lighting near ATMs to reduce robberies is a hardware solution. Authentication technology helps reduce identity theft. Market mechanisms, such as competition and consumer demand, generate many improvements. We all must become educated about the risks of the high-tech tools we use and learn how to use them safely. Legal solutions include effective law enforcement, criminal penalties, lawsuits, legislation, and regulation. For example, there must be appropriate penalties for people who commit fraud online, and there must be appropriate liability laws for cases where system failures occur.
The global reach of the Net
The ease of communication with distant countries has profound social, economic, and political effects—some beneficial, some not.
The Net makes information and opportunities more easily available to people isolated by geography or by political system. It makes crime fighting and law enforcement more difficult, because criminals can steal and disrupt services from outside the victim’s country. Laws in one country prohibiting certain content on the Web or certain kinds of Web services restrict people and businesses in other countries because the Web is accessible worldwide.
Trade-offs and controversy
Increasing privacy and security often means reducing convenience. Protecting privacy makes law enforcement more difficult. Unpleasant, offensive, or inaccurate information accompanies our access to the Web’s vast amounts of useful information.
Some of the topics we discuss are not particularly controversial. We will sometimes address an issue more as a problem-solving exercise than as a controversy. We will look at the
impact of electronic technology in a particular area, observe some problems that result, and describe solutions. On the other hand, many of the issues are controversial: leaking confidential information on the Internet, proper policies for privacy protection, how strict copyright law should be, offshoring of jobs, the impact of computers on quality of life.
We consider various viewpoints and arguments. Even if you have a strong position on one side of a controversy, it is important to know the arguments on the other side, for several reasons. Knowing that there are reasonable arguments for a different point of view, even if you do not think they are strong enough to win overall, helps make a debate more civilized. We see that the people on the other side are not necessarily evil, stupid, or ignorant; they may just put more weight on different factors. To convince others of your own viewpoint, you must counter the strongest arguments of the other side, so, of course, you first must know and understand them. Finally, you might change your own mind after considering arguments you had not thought of before.
Perfection is a direction, not an option.
In general, when evaluating new technologies and applications, we should not compare them to some ideal of perfect service or zero side effects and zero risk. That is impossible to achieve in most aspects of life. Instead, we should compare them to the alternatives and weigh the problems against the benefits. The ideal shows us the direction to go as we endeavor to seek improvements and solutions to problems.
Another reason that we cannot expect perfection is that we all have different ideas of what perfection is.
This does not excuse sloppiness. It is possible to meet extremely high standards.
Differences between personal choices, business policies, and law
The criteria for making personal choices, for making policies for businesses and organizations, and for writing laws are fundamentally different.
We can make a personal choice—for example, about what social networks to join, what apps to put on our phones, or what ebooks to buy—according to our individual values and situation. A business bases its policies on many factors, including the manager’s perception of consumer preferences, what competitors are doing, responsibilities to stockholders, the ethics of the business owners or managers, and relevant laws.
Laws are fundamentally different from personal choices and organizational policies because they impose decisions by force on people who did not make them. Arguments for passing a law should be qualitatively different from reasons for adopting a personal or organizational policy. It might seem odd at first, but arguments on the merits of the proposal—for example, that it is a good idea, or is efficient, or is good for business, or is helpful to consumers—are not good arguments for a law. We can use these arguments to try to convince a person or organization to adopt a particular policy voluntarily. Arguments for a law must show why the decision should be enforced against someone
who does not agree that it is a good idea. It is better to base laws on the notion of rights rather than on personal views about their benefits or how we want people to behave.
1.4 Ethics
Honesty is the best policy.
—English proverb, pre-1600
1.4.1 What Is Ethics, Anyway?
Sometimes, we discuss issues and problems related to computer technology from a somewhat detached perspective. We see how a new technology can create new risks and how social and legal institutions continually adapt. But technology is not an immutable force, outside of human control. People make decisions about what technologies and products to develop and how to use them. People make decisions about when a product is safe to release. People make decisions about access to and use of personal information. People make laws and set rules and standards.
Should you download movies from unauthorized websites? Should you talk on your cellphone while driving on a freeway? Should you hire foreign programmers who work at low salaries? Should you warn potential customers that the smartphone app you sell needs to copy their contact list? Should you fire an employee who is criticizing your business in social media? What information should you allow advertisers and other trackers to collect from visitors to the website you run? Someone sent you the contents of a friend’s (a teacher’s, a city council candidate’s) email account; should you post it on the Web? In these examples, you are confronting practical and legal issues—and ethical ones. In each case you can restate the problem as a question in the form “Is it right to . . . ?” Is it right to make a significant change in your company’s privacy policy without giving customers or members advance notice?
In this section, we introduce several ethical theories. We discuss some distinctions (e.g., between ethics and law) that are important to understand when tackling ethical issues.
Ethics is the study of what it means to “do the right thing.” It is a complex subject that has occupied philosophers for thousands of years. This presentation is necessarily simplified.
Ethical theory assumes that people are rational and make free choices. Neither of these conditions is always and absolutely true. People act emotionally, and they make mistakes. A person is not making a free choice when someone else is pointing a gun at him. Some argue that a person is not making a free choice in a situation where she might lose a job. However, free choice and use of rational judgment are capacities and characteristics of
human beings, and they are reasonably assumed as the basis of ethical theory. We take the view that the individual is, in most circumstances, responsible for his or her actions.
Ethical rules are rules to follow in our interactions with other people and in our actions that affect other people. Most ethical theories attempt to achieve the same goal: to enhance human dignity, peace, happiness, and well-being. Ethical rules apply to all of us and are intended to achieve good results for people in general, and for situations in general—not just for ourselves, not just for one situation. A set of rules that does this well respects the fact that we are each unique and have our own values and goals, that we have judgment and will, and that we act according to our judgment to achieve our goals. The rules should clarify our obligations and responsibilities—and our areas of choice and personal preference.*

* Not all ethical theories fit this description. Ethical relativism and some types of ethical egoism do not. In this book, however, we assume these goals and requirements for ethical theories.
We could view ethical rules as fundamental and universal, like laws of science. Or we could view them as rules we make up, like the rules of baseball, to provide a framework in which to interact with other people in a peaceful, productive way. The titles of two books illustrate these different viewpoints. One is Ethics: Discovering Right and Wrong; the other is Ethics: Inventing Right and Wrong.18 We do not have to decide which view is correct to find good ethical rules. In either case, our tools include reason, introspection, and knowledge of human nature, values, and behavior.
Behaving ethically, in a personal or professional sphere, is usually not a burden. Most of the time we are honest, we keep our promises, we do not steal, we do our jobs. This should not be surprising. If ethical rules are good ones, they work for people. That is, they make our lives better. Behaving ethically is usually practical. Honesty makes interactions among people work more smoothly and reliably, for example. We might lose friends if we often lie or break promises. Social institutions encourage us to do right: We might land in jail if caught stealing. We might lose our jobs if we do them carelessly. In a professional context, doing good ethically often corresponds closely with doing a good job in the sense of professional quality and competence. Doing good ethically often corresponds closely with good business in the sense that ethically developed products and ethical policies are more likely to please consumers. Sometimes, however, it is difficult to do the right thing. It takes courage in situations where we could suffer negative consequences. Courage is often associated with heroic acts, where one risks one’s life to save someone in a dangerous situation—the kind of act that makes news. Most of us do not have those opportunities to display courage, but we do have many opportunities in day-to-day life.
1.4.2 A Variety of Ethical Views19
Although there is much agreement about general ethical rules, there are many different theories about how to establish a firm justification for the rules and how to decide what is
ethical in specific cases. We give very brief descriptions of a few approaches to ethics. Some ethicists† make a distinction between ethical theories that view certain acts as good or bad because of some intrinsic aspect of the action and ethical theories that view acts as good or bad because of their consequences. They call these deontological (or nonconsequentialist) and consequentialist theories, respectively. The distinction is perhaps emphasized more than necessary. If the criteria that deontologists use to determine the intrinsic goodness or badness of an act do not consider its consequences for people—at least for most people, most of the time—their criteria would seem to have little ethical merit.

† Ethicists are philosophers (and others) who study ethics.
Deontological theories
Deontologists tend to emphasize duty and absolute rules, to be followed whether they lead to good or ill consequences in particular cases. One example is: Do not lie. An act is ethical if it complies with ethical rules and you chose it for that reason.
Immanuel Kant, the philosopher often presented as the prime example of a deontologist, contributed many important ideas to ethical theory. We mention three of them here. One is the principle of universality: We should follow rules of behavior that we can universally apply to everyone. This principle is so fundamental to ethical theory that we already accepted it in our explanation of ethics.
Second, deontologists argue that logic or reason determines rules of ethical behavior, that actions are intrinsically good because they follow from logic. Kant believed that rationality is the standard for what is good. We can reason about what makes sense and act accordingly, or we can act irrationally, which is evil. The view that something is evil because it is illogical might seem unconvincing, but Kant’s instruction to “Respect the reason in you”—that is, to use your reason, rationality, and judgment, rather than emotions, when making a decision in an ethical context—is a wise one.
Third, Kant stated a principle about interacting with other people: One must never treat people as merely means to ends, but rather as ends in themselves.
Kant took an extreme position on the absolutism of ethical rules. He argued, for instance, that it is always wrong to lie. For example, if a person is looking for someone he intends to murder, and he asks you where the intended victim is, it is wrong for you to lie to protect the victim. Most people would agree that there are cases in which even very good, universal rules should be broken—because of the consequences.
Utilitarianism
Utilitarianism is the main example of a consequentialist theory. Its guiding principle, as expressed by John Stuart Mill,20 is to increase happiness, or “utility.” A person’s utility is what satisfies the person’s needs and values. An action might decrease utility for some people and increase it for others. We should consider the consequences—the benefits and
damages to all affected people—and “calculate” the change in aggregate utility. An act is right if it tends to increase aggregate utility and wrong if it tends to decrease it.
Utilitarianism is a very influential theory, and it has many variations. As stated above, the utilitarian principle applies to individual actions. For each action, we consider the impact on utility and judge the action by its net impact. This is sometimes called “act utilitarianism.” One variant of utilitarianism, called “rule utilitarianism,” applies the utility principle not to individual actions but to general ethical rules. Thus, a rule utilitarian might argue that the rule “Do not lie” will increase total utility, and for that reason it is a good rule. Rule utilitarians do not do a utility calculation for each instance where they consider lying. Generally, a utilitarian would be more comfortable than a deontologist breaking a rule in circumstances where doing so would have good consequences.
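To make the act-utilitarian “calculation” concrete, here is a toy sketch in Python. The people, the numeric utility changes, and the very idea that such numbers can be assigned at all are invented for illustration; whether they can be assigned is precisely what the objections that follow call into question.

# Toy illustration of the act-utilitarian calculation described above.
# It assumes (unrealistically) that we can attach a numeric utility change
# to each affected person; the names and numbers here are invented.
def aggregate_utility_change(utility_changes):
    """Sum the gains and losses of everyone the act affects."""
    return sum(utility_changes.values())

def act_is_right(utility_changes):
    """Act utilitarianism: an act is right if it increases aggregate utility."""
    return aggregate_utility_change(utility_changes) > 0

# Invented example: an act that benefits two people slightly and harms one a lot.
example = {"Alice": +2.0, "Bob": +1.5, "Carol": -5.0}
print(act_is_right(example))  # False: aggregate utility falls by 1.5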
There are numerous problems with act utilitarianism. It might be difficult or impossible to determine all the consequences of an act. If we can do so, do we increase what we believe will, or should, contribute to the happiness of the people affected, or what they choose themselves? How do we know what they would choose? How do we quantify happiness in order to make comparisons among many people? Should some people’s utility carry more weight than others’? Should we weigh a thief’s gain of utility equal to the victim’s loss? Is a dollar worth the same to a person who worked for it and a person who received it as a gift? Or to a rich person and a poor person? How can we measure the utility of freedom?
A more fundamental (and ethical) objection to act utilitarianism is that it does not recognize or respect individual rights. It has no absolute prohibitions and so could allow actions that many people consider always wrong. For example, if there is a convincing case that killing one innocent person (perhaps to distribute his or her organs to several people who will die without transplants) or taking all of a person’s property and redistributing it to other community members would maximize utility in a community, utilitarianism could justify these acts. A person has no protected domain of freedom.
Rule utilitarianism suffers far less than does act utilitarianism from these problems. Recognizing that widespread killing and stealing decrease the security and happiness of all, a rule utilitarian can derive rules against these acts. We can state these particular rules in terms of rights to life and property.
Natural rights
Suppose we wish to treat people as ends rather than merely means and we wish to increase people’s happiness. These goals are somewhat vague and open to many interpretations in specific circumstances. One approach we might follow is to let people make their own decisions. That is, we try to define a sphere of freedom in which people can act freely according to their own judgment, without coercive interference by others, even others (including us) who think they are doing what is best for the people involved or for humanity in general. This approach views ethical behavior as acting in such a way
that respects a set of fundamental rights of others, including the rights to life, liberty, and property.
These rights are sometimes called natural rights because, in the opinion of some philosophers, they come from nature or we can derive them from the nature of humanity. John Locke21 argued that we each have an exclusive right to ourselves, our labor, and to what we produce with our labor. Thus, he argued for a natural right to property that we create or obtain by mixing our labor with natural resources. He saw protection of private property as a moral rule. If there were no protection for property, the person who invents a new tool would be loath to show it to others or use it within their sight, as they might take it. Clearing land and planting food would be pointless, as one could not be present at all times to prevent others from taking the crop. Thus, a right of private property increases overall wealth (utility) as well; the toolmaker or farmer has more to give or trade to others.
Respect for the rights to life, liberty, and property implies ethical rules against killing, stealing, deception, and coercion.
Those who emphasize natural rights tend to emphasize the ethical character of the process by which people interact, seeing acts generally as likely to be ethical if they involve voluntary interactions and freely made exchanges where the parties are not coerced or deceived. This contrasts with other ethical standards or approaches that tend to focus on the result or state achieved by the interaction, for example, seeing an action as likely to be unethical if it leaves some people poor.
Negative and positive rights, or liberties and claim rights
When people speak of rights, they are often speaking about two quite different kinds of rights. In philosophy books, these rights are usually called negative and positive rights, but the terms liberties and claim rights are more descriptive of the distinction.22
Negative rights, or liberties, are rights to act without interference. The only obligation they impose on others is not to prevent you from acting. They include the right to life (in the sense that no one may kill you), the right to be free from assault, the right to use your property, the right to use your labor, skills, and mind to create goods and services and to trade with other people in voluntary exchanges. The rights to “life, liberty, and the pursuit of happiness” described in the U.S. Declaration of Independence are liberties, or negative rights. Freedom of speech and religion, as guaranteed in the First Amendment of the U.S. Constitution, are negative rights: the government may not interfere with you, jail you, or kill you because of what you say or what your religious beliefs are. The right to work, as a liberty, or negative right, means that no one may prohibit you from working or, for example, punish you for working without getting a government permit. The (negative) right to access the Internet is so obvious in free countries that we do not even think of it. Authoritarian governments restrict or deny it.
Claim rights, or positive rights, impose an obligation on some people to provide certain things for others. A positive right to a job means that someone must hire you
regardless of whether they voluntarily choose to, or that it is right, or obligatory, for the government to set up job programs for people who are out of work. A positive right to life means that some people are obligated to pay for food or medical care for others who cannot pay for them. When we interpret freedom of speech as a claim right, or positive right, it means that we may require owners of shopping malls, radio stations, and online services to provide space or time for content they do not wish to include. Access to the Internet, as a claim right, could require such things as taxes to provide subsidized access for poor people or foreign aid to provide access in poor countries. The last example suggests the following question: How far does the obligation to provide a positive right extend? Also, when thinking about what might be a positive, or claim, right, it is helpful to consider whether something should be a claim right if it depends on achieving a certain level of technology. For example, if access to the Internet is a positive right now, was it a positive right in the 1800s?
Here is a more fundamental problem: negative rights and positive rights often conflict. Some people think that liberties are almost worthless by themselves and that society must devise social and legal mechanisms to ensure that everyone has their claim rights, or positive rights, satisfied, even if that means diminishing the liberties of some. Other people think that there can be no (or very few) positive rights, because it is impossible to enforce claim rights for some people without violating the liberties of others. They see the protection of liberties, or negative rights, as ethically essential.
This is one of the reasons for disagreement on issues such as some privacy protection regulations, for example. Although we will not solve the disagreement about which kind of right is more important, we can sometimes clarify the issues in a debate by clarifying which kind of right we are discussing.
Golden rules
The Bible and Confucius tell us to treat others as we would want them to treat us. This is a valuable ethical guideline. It suggests a reciprocity, or a role reversal. We should not take the rule too literally, however; we need to apply it at the appropriate level. It tells us to consider an ethical choice we are making from the perspective of the people it affects. No matter how much you enjoy fast driving on winding roads, it might not be kind to roar around those corners with a passenger who gets carsick easily. No matter how much you like your friends to share photos of you partying, it might not be good to share a photo of a friend who prefers privacy. We want people to recognize us as individuals and to respect our choices. Thus, we should respect theirs.
Contributing to society
We are focusing on how to make ethical decisions. Some ethical theories take a wider goal: how to live a virtuous life. That is beyond the scope of this book, but some of the ideas relate to ethical choices. Aristotle says that one lives a virtuous life by doing virtuous acts. This leaves us with a question: What is a virtuous act? Most people would agree that
helping to serve meals at a homeless shelter is a virtuous act. The view that this type of activity (doing unpaid charitable work) is the only or the main kind of virtuous act is common but is too limited. Suppose a nurse is choosing between spending one evening a week taking a course to learn new nursing skills or spending one evening a week helping at the homeless shelter. Or a programmer at a bank is choosing between a course on new computer security techniques and helping at the homeless shelter. There is nothing wrong with either choice. Is either one more virtuous than the other? The first choice increases the person’s professional status and possibly the person’s salary; you could see it as a selfish choice. The second choice is charitable work, helping unfortunate people. But the analysis should not stop there. A professional person, well trained and up-to-date in his or her profession, often can do far more to help a large number of people than the same person can accomplish performing low-skill tasks outside the person’s professional area. The fact that the person is paid for his or her work is not significant in evaluating its contribution. Doing one’s work (whether it is collecting garbage or performing brain surgery) honestly, responsibly, ethically, creatively, and well is a virtuous activity.
His philanthropy was in his work.
—Mike Godwin, writing about Apple co-founder Steve Jobs23
Social contracts and a theory of political justice24
Many topics we consider in this book go beyond individual ethical choices. They are social and legal policies. Thus we introduce (again, quite briefly) philosophical ideas about forming social and political systems.
The early foundations of social contract theory, the idea that people willingly submit to a common law in order to live in a civil society, are in the writings of Socrates and Plato but were not fully formed until the 1600s. Thomas Hobbes developed ideas of social contract theory in his book Leviathan (1651). Hobbes describes a starting point called the State of Nature, a dismal place where each man acts according to his own interests, no one is safe from physical harm, and there is no ability to ensure the satisfaction of one’s needs. Hobbes believed that man is rational and will seek a better situation, even at the cost of giving up some independence in favor of common law and accepting some authority to enforce this “social contract.” John Locke thought people could enforce moral rules, such as the rights to life, liberty and property, in a state of nature but that it was better to delegate this function to a government instituted by an implicit social contract.
The modern philosopher John Rawls25 took social contract theory further, developing provisions of the “contract” based on his view of justice as fairness. I will criticize parts of his work, but some of his points provide useful ethical guidelines. Rawls sought to establish principles for proper political power in a society with people of varying religions,
viewpoints, lifestyles, and so on. Rawls, like other social contract theorists, said that reasonable people, recognizing that a legal (or political) structure is necessary for social order, will want to cooperate on terms that all accept, and they will abide by the rules of society, even those they do not like. He argued that political power is proper only if we would expect all citizens to reasonably endorse its basic, or constitutional, principles. Tolerance is essential because deep questions are difficult, we answer them differently based on our life experiences, and people of good will can disagree. Thus, a proper political system protects basic civil liberties such as freedom of speech and free choice of occupation. It will not impose the views of some on the others.
To this point, Rawls’ foundation is consistent with an emphasis on liberties (negative rights). Rawls distinguishes his system of justice by adding a strong requirement for claim rights (positive rights): a just and fair political system will ensure that all citizens have sufficient means to make effective use of their freedoms.* To Rawls, government financing of election campaigns is an essential feature of the system. This is a very specific political policy; people hotly debate its fairness and practical consequences. Rawls has made a leap that appears inconsistent with his emphasis that people of good will disagree on important issues and that a proper political system does not impose the views of one group on another.

* The meaning of fairness is not obvious. In various contexts and to different people, it can mean being judged on one’s merits rather than irrelevant factors, getting an equal share, or getting what one deserves.
In Rawls’ view, an action or a social or political structure is not ethical if it has the effect of leaving the least-advantaged people worse off than they were before (or would be in some alternative system). Thus, in a sense, Rawls gives far more weight (indeed, infinite weight) to the utility of the least-advantaged people than to anyone else. This is odd as an absolute rule, and its fairness is not obvious. His emphasis on concern for the least well off, however, is a reminder to consider impacts on such people; a loss or harm to them can be more devastating than to someone in a better position.
Rawls proposed a conceptual formulation termed the “veil of ignorance” for deriving the proper principles or policies of a just social or political system. By extension, we can use it as a tool for considering ethical and social issues in this book. We imagine that each person behind the veil of ignorance does not know his or her gender, age, race, talents, wealth, and so on, in the real world. Behind the veil of ignorance, we choose policies that would be fair for all, protecting the most vulnerable and least-advantaged members of society. Many writers use this tool to derive what they conclude to be the correct ethical positions on social policy issues. I find that sometimes when I go behind the veil of ignorance, I come to a different conclusion than the author. The tool is useful, like the principles of the ethical theories we described earlier, but, like them, it is not absolute. Even ignoring our status in society, people of good will come to different conclusions
because of their knowledge of human behavior and economics and their understanding of how the world works.†

† Rawls specifies that we assume people behind the veil of ignorance have knowledge of accepted economic principles, but in fact many philosophers and ordinary people do not—and of course, people will disagree about what is accepted.
We illustrate with a policy example. The Children’s Online Privacy Protection Act (COPPA) is a privacy law intended to protect a vulnerable population by requiring that websites get parental permission before collecting personal information from children under 13. After COPPA passed, because of the expense of complying with its requirements and the potential liability, some companies deleted online profiles of all children under 13, some canceled their free email and home pages for kids, and some banned children under 13 entirely. The New York Times does not allow children under 13 to register to use its website. Facebook’s terms of use prohibit children under 13 from joining, but Consumer Reports estimates that more than seven million children under 13 have ignored the rule and joined.26 The fiction that there are no members under 13 implies there is no need to provide mechanisms to protect them. Economists would have predicted these effects. We might have come up with COPPA behind a veil of ignorance, but it is not clear how well it actually helps and protects children. More knowledge helps us make better decisions and design better policies and laws.
No simple answers
We cannot solve ethical problems by applying a formula or an algorithm. Human behavior and real human situations are complex. There are often trade-offs to consider. Ethical theories do not provide clear, incontrovertibly correct positions on most issues. We can use the approaches we described to support opposite sides of many an issue. For example, consider Kant’s imperative that one must never treat people as merely means to ends, but rather as ends in themselves. We could argue that an employer who pays an employee a very low wage, say, a wage too low to support a family, is wrongly treating the employee as merely a means for the employer to make money. But we could also argue that expecting the employer to pay more than he or she considers reasonable is treating the employer merely as a means of providing income for the employee. Similarly, it is easy for two utilitarians to come to different conclusions on a particular issue by measuring happiness or utility differently. A small set of basic natural rights might provide no guidance for many situations in which you must make ethical decisions—however, if we try to define rights to cover more situations, there will be fierce disagreement about just what those rights should be.
Although ethical theories do not completely settle difficult, controversial issues, they help to identify important principles or guidelines. They remind us of things to consider, and they can help clarify reasoning and values. There is much merit in Kant’s principle of
universalism and his emphasis on treating people as intrinsically valuable “ends.” “Do not lie, manipulate, or deceive” is a good ethical principle. There is much merit in utilitarianism’s consideration of consequences and its standard of increasing achievement of people’s happiness. There is much merit in the natural rights approach of setting minimal rules in a rights framework to guarantee people a sphere in which they can act according to their own values and judgment. The Golden Rule reminds us to consider the perspective of the people our actions affect. Rawls reminds us that it is especially important to consider the impact of our choices on the least-advantaged people.

Do organizations have ethics?

Some philosophers argue that it is meaningless to speak of a business or organization as having ethics. Individual people make all decisions and take all actions. Those people must have ethical responsibility for everything they do. Others argue that an organization that acts with intention and a formal decision structure, such as a business, is a moral entity.27 However, viewing a business as a moral entity does not diminish the responsibility of the individual people. Ultimately, it is individuals who are making decisions and taking actions. We can hold both the individuals and the company or organization responsible for their acts.*

Whether one accepts or rejects the idea that a business can have ethical rights and responsibilities, it is clear that organizational structure and policies lead to a pattern of actions and decisions that have ethical content. Businesses have a “corporate culture,” or a “personality,” or simply a reputation for treating employees and customers in respectful and honest—or careless and deceptive—ways. People in management positions shape the culture or ethics of a business or organization. Thus, decisions by managers have an impact beyond the particular product, contract, or action a decision involves. A manager who is dishonest with customers or who cuts corners on testing, for example, is setting an example that encourages other employees to be dishonest and careless. A manager’s ethical responsibility includes his or her contribution to the company’s ethical personality.

* Regardless of whether or not we view businesses and organizations as moral agents, they are legal entities and can be held legally responsible for their acts.
1.4.3 Some Important Distinctions
A number of important distinctions affect our ethical judgments, but they are often not clearly expressed or understood. In this section, we identify a few of these. Just being aware of these distinctions can help clarify issues in some ethical debates.
Right, wrong, and okay
In situations with ethical dilemmas, there are often many options that are ethically acceptable, with no specific one ethically required. Thus, it is misleading to divide all
acts into two categories, ethically right and ethically wrong. Rather, it is better to think of acts as either ethically obligatory, ethically prohibited, or ethically acceptable. Many actions might be virtuous and desirable but not obligatory.
Distinguishing wrong and harm
Carelessly and needlessly causing harm is wrong, but it is important to remember that harm alone is not a sufficient criterion to determine that an act is unethical. Many ethical, even admirable acts can make other people worse off. For example, you may accept a job offer knowing someone else wanted the job and needed it more than you do. You may reduce the income of other people by producing a better product that consumers prefer. If your product is really good, you might put a competitor out of business completely and cause many people to lose their jobs. Yet there is nothing wrong with doing honest, productive work.
Declining to give something (say, $100) to someone is not the same ethically as taking that thing away from the person, even though both actions leave the person $100 less well off than they would be otherwise. On a simplistic view that equates harm with loss, the harm would be essentially the same. To identify harm as wrong, we must identify what the person is due, what his or her rights are, and what our rights and obligations are.
On the other hand, there can be wrong when there is no (obvious or immediate) harm. Some hackers argue that breaking into computer systems is not wrong, because they do no harm. Aside from the fact that the hacker might do unintended harm, one can argue that hacking is a violation of property rights: a person has no right to enter your property without your permission, independent of how much harm is done in any particular instance.
Separating goals from constraints
Economist Milton Friedman wrote that the goal or responsibility of a business is to make a profit for its shareholders. This statement appalled some ethicists, as they believe it justifies, or is used to justify, irresponsible and unethical actions. It seems to me that arguments on this point miss the distinction between goals, on the one hand, and constraints on actions that may be taken to achieve the goals, on the other hand—or the distinction between ends and means. Our personal goals might include financial success and finding an attractive mate. Working hard, investing wisely, and being an interesting and decent person can achieve these goals. Stealing and lying might achieve them too. Stealing and lying are ethically unacceptable. Ethics tells us what actions are acceptable or unacceptable in our attempts to achieve the goals. There is nothing unethical about a business having the goal of maximizing profits. The ethical character of the company depends on whether the actions taken to achieve the goal are consistent with ethical constraints.28
Personal preference and ethics
Most of us have strong feelings about a lot of issues. It might be difficult to draw a line between what we consider ethically right or wrong and what we personally approve or disapprove of.
Suppose you get a job offer from a company whose products you do not like. You might decline the job and say you are doing so on ethical grounds. Are you? Can you convincingly argue that anyone who takes the job is acting unethically? Most likely you cannot, and that is not what you actually think. You do not want to work for a company you do not like. This is a personal preference. There is nothing ethically wrong with declining the job, of course. The company’s freedom to produce its products does not impose an ethical obligation on you to assist it.
When discussing political or social issues, people frequently argue that their position is right in a moral or ethical sense or that an opponent’s position is morally wrong or unethical. People tend to want to be on the “moral high ground.” People feel the stigma of an accusation that their view is ethically wrong. Thus, arguments based on ethics can be, and often are, used to intimidate people with different views. It is a good idea to try to distinguish between actions we find distasteful, rude, or ill-advised and actions that we can argue convincingly are ethically wrong.
Law and ethics
What is the connection between law and ethics? Sometimes very little. Is it ethical to prohibit marijuana use by terminally ill people? Is it ethical for the government or a state university to give preference in contracts, hiring, or admissions to people in specific ethnic groups? Is it ethical for a bank loan officer to carry customer records on a laptop to work at the beach? The current law, whatever it happens to be at a particular time, does not answer these questions. In addition, history provides numerous examples of laws most of us consider profoundly wrong by ethical standards; slavery is perhaps the most obvious example. Ethics precedes law in the sense that ethical principles help determine whether or not we should pass specific laws.
Some laws enforce ethical rules (e.g., against murder and theft). By definition, we are ethically obligated to obey such laws—not because they are laws, but because the laws implement the obligations and prohibitions of ethical rules.
Another category of laws establishes conventions for business or other activities. Commercial law, such as the Uniform Commercial Code, defines rules for economic transactions and contracts. Such rules provide a framework in which we can interact smoothly and confidently with strangers. They include provisions for how to interpret a contract if a court must resolve a dispute. These laws are extremely important to any society and they should be consistent with ethics. Beyond basic ethical considerations, however, details could depend on historic conventions, practicality, and other nonethical criteria. In the United States, drivers must drive on the right side of the road; in England,
drivers must drive on the left side. There is obviously nothing intrinsically right or wrong about either choice. However, once the convention is established, it is wrong to drive on the wrong side of the road because it needlessly endangers other people.
Unfortunately, many laws fall into a category that is not intended to implement ethical rules—or even be consistent with them. The political process is subject to pressure from special interest groups of all sorts who seek to pass laws that favor their groups or businesses. Examples include the laws (promoted by the television networks) that delayed the introduction of cable television and, later, laws (promoted by some cable television companies) to restrict satellite dishes. When margarine was first introduced, the dairy industry successfully lobbied for laws against coloring margarine yellow to look more like butter. After opposing re-sale auctions of event tickets for years, Ticketmaster accepted this popular online sales paradigm—and lobbied for laws restricting competitors.29
Many prominent people in the financial industry reported receiving a large number of fundraising letters from members of Congress—in the week that Congress took up new regulations for their industry. Many political, religious, or ideological organizations promote laws to require (or prohibit) certain kinds of behavior that the group considers desirable (or objectionable). Examples include prohibitions on teaching foreign languages in schools (in the early 20th century),30 prohibitions on gambling or alcohol, requirements for recycling, and requirements that stores close on Sundays. At an extreme, in some countries, this category includes restrictions on the practice of certain religions. Some politicians or political parties pass laws, no matter how public-spirited they sound, purely to give themselves and their friends or donors advantages.
Copyright law has elements of all three categories we have described. It defines a property right, violation of which is a form of theft. Because of the intangible nature of intellectual property, some of the rules about what constitutes copyright infringement are more like the second category, pragmatic rules devised to be workable. Powerful groups (e.g., the publishing, music, and movie industries) lobby for specific rules to benefit themselves. This is why some violations of copyright law are clearly unethical (if one accepts the concept of intellectual property), yet others seem to be entirely acceptable, sometimes even noble.
Legislators and their staffs draft some laws in haste, and such laws make little sense. Some laws and regulations run to hundreds or thousands of pages and are full of specific details that make many ethical choices illegal. When members of Congress debate whether pizza is a vegetable,31 they are not debating an ethical issue.
Do we have an ethical obligation to obey a law just because it is a law? Some argue that we do: as members of society, we must accept the rules that the legislative process has created so long as they are not clearly and utterly ethically wrong. Others argue that, whereas this might often be a good policy, it is not an ethical obligation. Legislators are just a group of people, subject to errors and political influences; there is no reason to feel an ethical obligation to do something just because they say so. Indeed, some believe all
laws that regulate personal behavior or voluntary economic transactions to be violations of the liberty and autonomy of the people forced to obey and, hence, to be ethically wrong.
Is it always ethically right to do something that is legal? No. Laws must be uniform and stated in a way that clearly indicates what actions are punishable. Ethical situations are complex and variable; the people involved might know the relevant factors, but it might not be possible to prove them in court. There are widely accepted ethical rules that would be difficult and probably unwise to enforce absolutely with laws—for example: Do not lie. New law lags behind new technology for good reasons. It takes time to recognize new problems associated with the technology, consider possible solutions, think and debate about the consequences and fairness of various proposals, and so on. A good law will set minimal standards that can apply to all situations, leaving a large range of voluntary choices. Ethics fills the gap between the time when technology creates new problems and the time when legislatures pass reasonable laws. Ethics fills the gap between general legal standards that apply to all cases and the particular choices made in a specific case.
While it is not ethically obligatory to obey all laws, that is not an excuse to ignore laws, nor is a law (or lack of a law) an excuse to ignore ethics.
EXERCISES
Review Exercises
1.1 What were two unexpected uses of social networking?
1.2 What are two ways free services on the Web are paid for?
1.3 Describe two applications of speech recognition.
1.4 List two applications mentioned in this chapter that help ordinary people to do things for which we used to rely on experts.
1.5 What are two of Kant’s important ideas about ethics?
1.6 What is the difference between act utilitarianism and rule utilitarianism?
1.7 Give an example of a law that implements an ethical principle. Give an example of a law that enforces a particular group’s idea of how people should behave.
1.8 Explain the distinction between the negative and positive right to freedom of speech.
1.9 When one goes behind Rawls’ veil of ignorance, what is one ignorant of?
General Exercises
1.10 Write a short essay (roughly 300 words) about some topic related to computing technology or the Internet that interests you and has social or ethical implications. Describe the background; then identify the issues, problems, or questions that you think are important.
1.11 Christie’s (www.christies.com), an international auction house, was founded in 1766. So why was eBay a big deal?
1.12 Some high schools ban use of cellphones during classes. Some require that students turn in their phones at the beginning of class and retrieve them afterwards. What are some reasons for these policies? Do you think they are good policies? Explain.
1.13 What are some advantages and disadvantages of online libraries (of entire books) as compared to “brick and mortar” libraries? Give at least five distinct replies in total.
1.14 It has become popular to post video on the Web showing people being rude, arguing, littering, and singing or dancing poorly. Is public shaming appropriate for these actions? Discuss some social and ethical considerations.
1.15 Describe a useful application, other than those mentioned near the end of Section 1.2.3, for a system with which the user controls a display with hand movements, without touching a screen or controls.
1.16 Think up some computerized device, software, or online service that does not yet exist, but that you would be very proud to help develop. Describe it.
1.17 List three applications of computing and communication technology mentioned in this chapter that reduce the need for transportation. What are some advantages of doing so?
1.18 For each of the following tasks, describe how it was probably done 25 years ago (before the World Wide Web). Briefly tell what the main difficulties or disadvantages of the older ways were. Tell if you think there were advantages.
(a) Getting a copy of a bill being debated in Congress (or your country’s legislature)
(b) Finding out if there are new treatments for lung cancer and how good they are
(c) Selling a poster advertising a Beatles concert from the 1960s
1.19 Many elderly people have trouble remembering words, people’s names, and recent events. Imagine a memory-aid product. What features would it have? What technologies would you use if you were designing it?
1.20 Which kind of ethical theory, deontologist or consequentialist, works better for arguing that it is wrong to drive one’s car on the left side of a road in a country where people normally drive on the right? Explain.
1.21 Develop a code of ethics and etiquette for use of cellphones. Include provisions for cameras in phones.
1.22 In the following (true) cases, tell whether the people are interpreting the right they claim as a negative right (liberty) or as a positive right (claim right). Explain. In each case, which kind of right should it be, and why?
(a) A man sued his health insurance company because it would not pay for Viagra, the drug for treating male impotence. He argued that the insurer’s refusal to pay denied his right to a happy sex life.
(b) Two legislators who ran for reelection lost. They sued an organization that sponsored ads criticizing their voting records. The former legislators argued that the organization interfered with their right to hold office.
1.23 Advocacy groups have sued companies because blind people cannot use their websites.32 The suits argue that the Americans With Disabilities Act requires that the sites be accessible. Should a law require all business and government websites to provide full access for disabled people? Discuss arguments for both sides. Identify the negative and positive rights involved. Which side do you think is stronger? Why?
1.24 If John Rawls were writing now, do you think he would include providing Internet access and cellphones for all citizens as an essential requirement of a just political system? Explain.
1.25 The campaign of a gubernatorial candidate distorted a digital image of his opponent in a television interview to make the opponent appear more menacing.33 Do you think this was an ethical action? Why? How does it differ from using a caricature?
1.26 Following a debate among political candidates during a campaign, you quietly record video of candidates talking with individuals from the audience. One candidate, responding sympathetically to a person complaining about how an insurance company handled his insurance claim, says, “All insurance company executives ought to be shot.” Another candidate, talking with a person who is angry about illegal immigration, says, “Anyone sneaking across the border illegally ought to be shot.” Another candidate, sprawled on a chair in the back of the room, is snoring. And a fourth, a man, invites an attractive woman back to his hotel to continue their conversation over drinks.
Discuss the ethics of posting videos of the candidates’ comments (or snoring) on the Web. Give reasons in favor of posting and reasons not to post.
Which, if any, would you post? To what extent would, or should, your support of or opposition to the candidate affect the decision?
1.27 (a) Thinking ahead to Chapter 2, identify an example, application, or service mentioned in this chapter that could have a major impact on our level of privacy. Briefly explain how.
(b) Thinking ahead to Chapter 3, identify an example, application, or service mentioned in this chapter that could pose a serious threat to freedom of speech. Briefly explain how.
(c) Thinking ahead to Chapter 8, identify an example, application, or service mentioned in this chapter where an error in the system could pose a serious danger to people’s lives. Briefly explain how.
Assignments
These exercises require some research or activity.
1.28 Go around your home and make a list of all the appliances and devices that contain a computer chip.
1.29 Arrange an interview with a disabled student on your campus. Ask the student to describe or demonstrate some of the computer tools he or she uses. (If your campus has a Disabled Student Center, its staff may be able to help you find an interview subject.) Write a report of the interview and/or demonstration.
1.30 Go to your campus library and view the microfilm or microfiche for an issue of the New York Times describing the first moon landing. (If you cannot find that, pick any old newspaper available in one of these media.) Read a few articles. Compare the convenience of using these media (standard for research not long ago) to reading articles on the Web.
1.31 Computing technology has had a huge impact on farming. We mentioned cow-milking machines and a few other applications in Section 1.2.3. Research a farming application and write a short report on it. (You may choose one in this book or something else.)
1.32 Research any one application of computing technology in health care and write a short report on it.
1.33 Over the next month or two (whatever is appropriate for the length of your course), collect news articles on (1) benefits and valuable applications of computer technology and (2) failures and/or
problems that computer technology has caused. The articles should be current, that is, published during the time period of your course. Write a brief summary and commentary on two articles in each category indicating how they relate to topics covered in this book.
Class Discussion Exercises
These exercises are for class discussion, perhaps with short presentations prepared in advance by small groups of students.
1.34 The Encyclopaedia Britannica first appeared in print in 1768. It went online in 1994. In 2012, the publisher stopped printing the hardcopy version. Was this a sad event, a positive step, or an unimportant one? Are there risks in having major stores of historic knowledge only in electronic form?
1.35 Is it ethically acceptable or ethically prohibited for an advocacy group to launch a socialbot on a social media system such as Twitter that pretends to be a person and subtly promotes the group’s viewpoint? Consider the same question for a socialbot that promotes a particular company’s products.
1.36 A car company offers as an option a system that will detect a pedestrian in the path of the car, warn the driver, and brake the car if the driver does not respond. The option costs $2000. If someone buys the car, does the person have an ethical obligation to buy the optional system to protect pedestrians?
BOOKS AND ARTICLES
Many of these references include topics that are covered throughout this book. Some of the references in Chapter 9 also include topics covered throughout this book.
. The Alliance for Technology Access, Computer Resources for People With Disabilities, 4th ed., Hunter House Publishers, 2004, www.ataccess.org.
. Stan Augarten, Bit by Bit: An Illustrated History of Computers, Ticknor & Fields, 1984. The early history, of course.
. Tim Berners-Lee, Usenet post describing the WorldWideWeb project, August 1991: groups.google.com/group/alt.hypertext/ msg/395f282a67a1916c.
. Frances Cairncross, The Death of Distance 2.0: How the Communications Revolution Is Changing Our Lives, Harvard Business School Press, 2001.
. Peter J. Denning, ed., The Invisible Future: The Seamless Integration of Technology Into Everyday Life, McGraw Hill, 2001.
. Peter Denning and Robert Metcalfe, Beyond Calculation: The Next Fifty Years of Computing, Copernicus, 1997.
. Michael Dertouzos, What Will Be: How the New World of Information Will Change Our Lives, HarperEdge, 1997.
. Joseph Ellin, Morality and the Meaning of Life: An Introduction to Ethical Theory, Harcourt Brace Jovanovich, 1995.
. Neil A. Gershenfeld, When Things Start to Think, Henry Holt & Co., 1999.
. James Gleick, The Information: A History, a Theory, a Flood, Pantheon, 2011.
. Duncan Langford, ed., Internet Ethics, St. Martin’s Press, 2000.
. Steven Levy, In The Plex: How Google Thinks, Works, and Shapes Our Lives, Simon & Schuster, 2011.
. Ben McConnell and Jackie Huba, Citizen Marketers, Kaplan Publishing, 2006. How ordinary people influence other consumers, democratizing marketing.
. Joel Mokyr, The Gifts of Athena: Historical Origins of the Knowledge Economy, Princeton University Press, 2002.
. Jan Narveson, Moral Matters, Broadview Press, 1993. The first chapter gives a good, very readable introduction to moral issues.
. Louis P. Pojman and James Fieser, Ethical Theory: Classical and Contemporary Readings, 6th ed., Wadsworth, 2010. Includes John Stuart Mill’s “Utilitarianism,” Kant’s “The Foundations of the Metaphysic of Morals,” John Locke’s “Natural Rights,” and other classical essays on various ethical theories.
. Michael J. Quinn, Ethics for the Information Age, 4th ed., Addison Wesley, 2011.
. Glenn Reynolds, An Army of Davids: How Markets and Technology Empower Ordinary People to Beat Big Media, Big Government, and Other Goliaths, Nelson Current, 2006.
. Richard A. Spinello and Herman T. Tavani, eds., Readings in CyberEthics, Jones and Bartlett, 2001.
. Don Tapscott and Anthony D. Williams, Wikinomics: How Mass Collaboration Changes Everything, Portfolio, 2006.
. Sherry Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other, Basic Books, 2011. Includes results from many interviews with teenagers and college students about the impact on them of personal communications media.
. Vernor Vinge, Rainbows End, Tor, 2006. A science fiction novel, set in the near future, that imagines how computer technology may affect communication, education, medical care, and many facets of ordinary life.
NOTES
1. Michael Rothschild, “Beyond Repair: The Politics of the Machine Age Are Hopelessly Obsolete,” The New Democrat, July/August 1995, pp. 8–11.
2. Stephen E. Ambrose, Undaunted Courage: Meriwether Lewis, Thomas Jefferson and the Opening of the American West, Simon & Schuster, 1996, p. 53.
3. Betty Friedan, The Feminine Mystique, W. W. Norton, 1963, p. 312.
4. In Chapters 1 and 2, respectively, of Peter H. Diamandis and Steven Kotler, Abundance: The Future Is Better Than You Think, Free Press, 2012. I paraphrased slightly.
5. Research centers doing this kind of research include the MIT Human Dynamics Laboratory, Harvard University, AT&T Labs, the London School of Economics, and others. For an overview of some of the research, see Robert Lee Hotz, “The Really Smart Phone,” Wall Street Journal, Apr. 22, 2011, online.wsj.com/article/SB10001424052748704547604576263261679848814.html, viewed Mar. 4, 2012.
6. Photos appear on the Web showing religious Muslim women with uncovered faces and other people in
bathrooms or locker rooms or other embarrassing or awkward situations. This problem was more acute when cameras first appeared in cellphones and most people were unaware of them.
7. Quoted in Robert Fox, “Newstrack,” Communications of the ACM, Aug. 1995, 38:8, pp. 11–12.
8. For a good overview, see Eric Beidel, “Social Scientists and Mathematicians Join The Hunt for Terrorists,” National Defense, Sept. 2010, www.nationaldefensemagazine.org, viewed Mar. 8, 2012.
9. “Email Statistics Report, 2011–2015,” The Radicati Group, Inc., www.radicati.com; Heinz Tschabitscher, “How Many Emails Are Sent Every Day?” email.about.com/od/emailtrivia/f/emails_per_day.htm; both viewed Aug. 22, 2011.
10. Statement of Vinton G. Cerf, U.S. Senate Committee on the Judiciary Hearing on Reconsidering our Communications Laws, June 14, 2006, judiciary.senate.gov/testimony.cfm?id=1937&wit_id=5416.
11. Steven Leckart, “The Stanford Education Experiment,” Wired, April 2012, pp. 68–77.
12. Robert D. Atkinson, “Leveling the E-Commerce Playing Field: Ensuring Tax and Regulatory Fairness for Online and Offline Businesses,” Progressive Policy Institute Policy Report, June 30, 2003, www.ppionline.org, viewed Sept. 3, 2007. Jennifer Saranow, “Savvy Car Buyers Drive Bargains with Pricing Data from the Web,” Wall Street Journal, Oct. 24, 2006, p. D5.
13. The last line of the paragraph is a paraphrase of the headline on an article Searle wrote, “Watson Doesn’t Know It Won on ‘Jeopardy!,’” Wall Street Journal, Feb. 17, 2011, online.wsj.com/article/SB10001424052748703407304576154313126987674.html, viewed Mar. 6, 2012. The original Chinese room argument is in John Searle, “Minds, Brains and Programs,” Behavioral and Brain Sciences, Cambridge University Press, 1980, pp. 417–424.
14. IBM did not disclose the cost. I have seen estimates of $30–$100 million.
15. William M. Bulkeley, “Profit in Motion: Tiny Sensors Take Off,” Wall Street Journal, May 10, 2007, p. B3.
16. Evan Ratliff, “Born to Run,” Wired, July 2001, pp. 86–97. Rheo and Power Knees by Ossur, www.ossur.com, viewed Aug. 25, 2006. John Hockenberry, “The Human Brain,” Wired, Aug. 2001, pp. 94–105. Aaron Saenz, “Ekso Bionics Sells its First Set of Robot Legs Allowing Paraplegics to Walk,” Singularity Hub, Feb. 27, 2012, singularityhub.com/2012/02/27/ekso-bionics-sells-its-first-set-of-robot-legs-allowing-paraplegics-to-walk, viewed Mar. 7, 2012.
17. Hockenberry, “The Human Brain,” describes various brain interface devices.
18. By Louis P. Pojman (Wadsworth, 1990) and J. L. Mackie (Penguin Books, 1977), respectively.
19. Sources for this section include: Joseph Ellin, Morality and the Meaning of Life: An Introduction to Ethical Theory, Harcourt Brace Jovanovich, 1995; Deborah G. Johnson, Computer Ethics, Prentice Hall, 2nd ed., 1994; Louis Pojman, Ethical Theory: Classical and Contemporary Readings, 2nd ed., Wadsworth, 1995 (which includes John Stuart Mill’s “Utilitarianism,” Kant’s “The Foundations of the Metaphysic of Morals,” and John Locke’s “Natural Rights”); and James Rachels, The Elements of Moral Philosophy, McGraw Hill, 1993; “John Locke (1632–1704),” Internet Encyclopedia of Philosophy, Apr. 17, 2001, www.iep.utm.edu/locke, viewed Mar. 16, 2012; Celeste Friend, “Social Contract Theory,” Internet Encyclopedia of Philosophy, Oct. 15, 2004, www.iep.utm.edu/soc-cont, viewed Mar. 16, 2012; Sharon A. Lloyd and Susanne Sreedhar, “Hobbes’s Moral and Political Philosophy,” The Stanford Encyclopedia of Philosophy (Spring 2011 Edition), Edward N. Zalta (ed.), plato.stanford.edu/archives/spr2011/entries/hobbes-moral; Leif Wenar, “John Rawls,” The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta, ed., plato.stanford.edu/archives/fall2008/entries/rawls, viewed Mar. 16, 2012.
20. John Stuart Mill, Utilitarianism, 1863.
21. John Locke, Two Treatises of Government, 1690.
22. J. L. Mackie uses the term claim rights in Ethics: Inventing Right and Wrong. Another term for positive rights is entitlements.
23. Slightly paraphrased from Mike Godwin, “Steve Jobs, the Inhumane Humanist,” Reason, Jan. 10, 2012, reason.com/archives/2012/01/10/steve-jobs-the-inhumane-humanist, viewed Mar. 14, 2012.
24. Julie Johnson assisted with the background for this section.
25. A Theory of Justice, 1971, and Justice as Fairness, 2001.
26. “That Facebook Friend Might Be 10 Years Old, and Other Troubling News,” Consumer Reports, June 2011, www.consumerreports.org/cro/magazine-archive/2011/june/electronics-computers/state-of-the-net/facebook-concerns/index.htm, viewed Jan. 22, 2012.
27. Kenneth C. Laudon, “Ethical Concepts and Information Technology,” Communications of the ACM, Dec. 1995, 38:12, p. 38.
28. Some goals appear to be ethically wrong in themselves—for example, genocide—although often it is because the only way to achieve the goal is by methods that are ethically unacceptable (killing innocent people).
29. Kent Smetters, “Ticketmaster vs. Ticket Buyers,” American Enterprise Institute, Oct. 24, 2006, www.aei.org/publications/pubID.25049,filter.all/pub_detail.asp, viewed Nov. 1, 2006.
30. Nebraska, for example, banned teaching foreign languages in public or private schools below ninth grade.
31. Specifically, whether the tomato sauce should be counted as a vegetable to satisfy health requirements for school lunches.
32. For example, National Federation for the Blind v. Target Corporation.
33. Anne Branscomb, Who Owns Information? , Basic Books, 1994, pp. 73–75.
2 Privacy
2.1 Privacy Risks and Principles
2.2 The Fourth Amendment, Expectation of Privacy, and Surveillance
Technologies
2.3 The Business and Social Sectors
2.4 Government Systems
2.5 Protecting Privacy: Technology, Markets, Rights, and Laws
2.6 Communications
Exercises
2.1 Privacy Risks and Principles
2.1.1 What Is Privacy?
After the fall of the communist government in East Germany, people examined the files of Stasi, the secret police. They found that the government had used spies and informants to build detailed dossiers on the opinions and activities of roughly six million people— a third of the population. The informers were neighbors, co-workers, friends, and even family members of the people they reported on. The paper files filled an estimated 125 miles of shelf space.1
Before the digital age, surveillance cameras watched shoppers in banks and stores. And well into the era of computers and the Internet, pharmacies in Indiana disposed of hundreds of prescriptions, receipts, and order forms for medicines by tossing them into an open dumpster. Private investigators still search household garbage for medical and financial information, details of purchases, evidence of romantic affairs, and journalists’ notes.
Computer technology is not necessary for the invasion of privacy. However, we discuss privacy at length in this book because the use of digital technology has made new threats possible and old threats more potent. Computer technologies—databases, digital cameras, the Web, smartphones, and global positioning system (GPS) devices, among others—have profoundly changed what people can know about us and how they can use that information. Understanding the risks and problems is a first step toward protecting privacy. For computer professionals, understanding the risks and problems is a step toward designing systems with built-in privacy protections and less risk.
There are three key aspects of privacy:
. Freedom from intrusion—being left alone
. Control of information about oneself
. Freedom from surveillance (from being followed, tracked, watched, and eavesdropped upon)
For the most part, in this book, we view privacy as a good thing. Critics of privacy argue that it gives cover to deception, hypocrisy, and wrongdoing. It allows fraud. It protects the guilty. Concern for privacy may be regarded with a suspicious “What do you have to hide?” The desire to keep things private does not mean we are doing anything wrong. We might wish to keep health, relationship, and family issues private. We might wish to keep religious beliefs and political views private from some of the people we interact with. Privacy of some kinds of information can be important to safety and security as well. Examples include travel plans, financial data, and for some people, simply a home address.
Privacy threats come in several categories:
. Intentional, institutional uses of personal information (in the government sector primarily for law enforcement and tax collection, and in the private sector primarily for marketing and decision making)
. Unauthorized use or release by “insiders,” the people who maintain the information
. Theft of information
. Inadvertent leakage of information through negligence or carelessness
. Our own actions (sometimes intentional trade-offs and sometimes when we are unaware of the risks)
Privacy issues arise in many contexts. More topics with privacy implications appear in later chapters. We discuss spam, the intrusion of junk email and text messages, in Chapter 3. We address hacking and identity theft in Chapter 5. We discuss monitoring of workplace communications and other issues of privacy for employees in Chapter 6. Some privacy risks result from the fact that so much of the data stored about us is incorrect. Databases contain errors. Files are not updated. Records of different people with similar names or other similarities get comingled or confused. Chapter 8 discusses some of these problems. Privacy comes up again in Chapter 9, where we focus on the responsibilities of computer professionals.
It is clear that we cannot expect complete privacy. We usually do not accuse someone who initiates a conversation of invading our privacy. Many friends and slight acquaintances know what you look like, where you work, what kind of car you drive, and whether you are a nice person. They need not get your permission to observe and talk about you. Control of information about oneself means control of what is in other people’s minds, phones, and data storage systems. It is necessarily limited by basic human rights, particularly freedom of speech. Nor can we expect to be totally free from surveillance. People see us and hear us when we move about in public (physically or on the Web).
If you live in a small town, you have little privacy; everyone knows everything about you. In a big city, you are more nearly anonymous. But if people know nothing about you, they might be taking a big risk if they rent you a place to live, hire you, lend you money, sell you automobile insurance, accept your credit card, and so on. We give up some privacy for the benefits of dealing with strangers. We can choose to give up more in exchange for other benefits such as convenience, personalized service, and easy communication with many friends. But sometimes, others make the choices for us.
I use many real incidents, businesses, products, and services as examples throughout this book. In most cases, I am not singling them out for special endorsement or criticism. They are just some of the many examples we can use to illustrate problems, issues, and possible solutions.
The man who is compelled to live every minute of his life among others and whose every need, thought, desire, fancy or gratification is subject to public scrutiny, has been deprived of his individuality and human dignity. [He] merges with the mass. . . . Such a being, although sentient, is fungible; he is not an individual.
—Edward J. Bloustein2
It’s important to realize that privacy preserves not personal secrets, but a sense of safety within a circle of friends so that the individual can be more candid, more expressive, more open with “secrets.”
—Robert Ellis Smith3
2.1.2 New Technology, New Risks
Computers, the Internet, and a whole array of digital devices—with their astounding increases in speed, storage space, and connectivity—make the collection, searching, analysis, storage, access, and distribution of huge amounts of information and images much easier, cheaper, and faster than ever before. These are great benefits. But when the information is about us, the same capabilities threaten our privacy.
Today there are thousands (probably millions) of databases, both government and private, containing personal information about us. In the past, there was simply no record of some of this information, such as our specific purchases of groceries and books. Government documents like divorce and bankruptcy records have long been in public records, but accessing such information took a lot of time and effort. When we browsed in a library or store, no one knew what we read or looked at. It was not easy to link together our financial, work, and family records. Now, large companies that operate video, email, social network, and search services can combine information from a member’s use of all of them to obtain a detailed picture of the person’s interests, opinions, relationships, habits, and activities. Even if we do not log in as members, software tracks our activity on the Web. In the past, conversations disappeared when people finished speaking, and only the sender and the recipient normally read personal communications. Now, when we communicate by texting, email, social networks, and so on, there is a record of our words that others can copy, forward, distribute widely, and read years later. Miniaturization of processors and sensors put tiny cameras in cellphones that millions of people carry everywhere. Cameras in some 3-D television sets warn children if they are sitting too close. What else might such cameras record, and who might see it? The wireless appliances we carry contain GPS and other location devices. They enable others to determine our location and track our movements. Patients refill prescriptions and check the results of medical tests on the Web. They correspond with doctors by email. We store our photos
and videos, do our taxes, and create and store documents and financial spreadsheets in a cloud of remote servers instead of on our own computer. Power and water providers might soon have metering and analysis systems sophisticated enough to deduce what appliances we are using, when we shower (and for how long), and when we sleep. Law enforcement agencies have very sophisticated tools for eavesdropping, surveillance, and collecting and analyzing data about people’s activities, tools that can help reduce crime and increase security—or threaten privacy and liberty.
Combining powerful new tools and applications can have astonishing results. It is possible to snap a photo of someone on the street, match the photo to one on a social network, and use a trove of publicly accessible information to guess, with high probability of accuracy, the person’s name, birth date, and most of his or her Social Security number. This does not require a supercomputer; it is done with a smartphone app. We see such systems in television shows and movies, but to most people they seem exaggerated or way off in the future.
All these gadgets, services, and activities have benefits, of course, but they expose us to new risks. The implications for privacy are profound.
Patient medical information is confidential. It should not be discussed in a public place.
—A sign, aimed at doctors and staff, in an elevator in a medical office building, a reminder to prevent low-tech privacy leaks.
Example: Search query data
After a person enters a phrase into a search engine, views some results, then goes on to another task, he or she expects that the phrase is gone—gone like a conversation with a friend or a few words spoken to a clerk in a store. After all, with millions of people doing searches each day for work, school, or personal uses, how could the search company store it all? And who would want all that trivial information anyway? That is what most people thought about search queries until two incidents demonstrated that it is indeed stored, it can be released, and it matters.
Search engines collect many terabytes of data daily. A terabyte is a trillion bytes. It would have been absurdly expensive to store that much data in the recent past, but no longer. Why do search engine companies store search queries? It is tempting to say “because they can.” But there are many uses for the data. Suppose, for example, you search for “Milky Way.” Whether you get lots of astronomy pages or information about the candy bar or a local restaurant can depend on your search history and other information about you. Search engine companies want to know how many pages of search results users actually look at, how many they click on, how they refine their search queries, and what spelling errors they commonly make. The companies analyze the data to improve
search services, to target advertising better, and to develop new services. The database of past queries also provides realistic input for testing and evaluating modifications in the algorithms search engines use to select and rank results. Search query data are valuable to many companies besides search engine companies. By analyzing search queries, companies draw conclusions about what kinds of products and features people are looking for. They modify their products to meet consumer preferences.
But who else gets to see this mass of data? And why should we care? If your own Web searches have been on innocuous topics, and you do not care who
sees your queries, consider a few topics people might search for and think about why they might want to keep them private: health and psychological problems, bankruptcy, uncontrolled gambling, right-wing conspiracies, left-wing conspiracies, alcoholism, anti-abortion information, pro-abortion information, erotica, illegal drugs. What are some possible consequences for a person doing extensive research on the Web for a suspense novel about terrorists who plan to blow up chemical factories?
In 2006, the federal government presented Google with a subpoena* for two months of user search queries and all the Web addresses† that Google indexes.‡ Google protested, bringing the issue to public attention. Although the subpoena did not ask for names of users, the idea of the government gaining access to the details of people’s searches horrified privacy advocates and many people who use search engines. Google and privacy advocates opposed the precedent of government access to large masses of such data. A court reduced the scope of the subpoena, removing user queries.4
A few months later, release of a huge database of search queries at AOL showed that privacy violations occur even when the company does not associate the queries with people’s names. Against company policy, an employee put the data on a website for search technology researchers. This data included more than 20 million search queries by more than 650,000 people from a three-month period. The data identified people by coded ID numbers, not by name. However, it was not difficult to deduce the identity of some people, especially those who searched on their own name or address. A process called re-identification identified others. Re-identification means identifying the individual from a set of anonymous data. Journalists and acquaintances identified people in small communities who searched on numerous specific topics, such as the cars they own, the sports teams they follow, their health problems, and their hobbies. Once identified, a person is linked to all his or her other searches. AOL quickly removed the data, but journalists,
* A subpoena is a court order for someone to give testimony or provide documents or other information for an investigation or a trial.
† We use the term Web address informally for identifiers, or addresses, or URLs of pages or documents on the Web (the string of characters one types in a Web browser).
‡ It wanted the data to respond to court challenges to the Child Online Protection Act (COPA), a law intended to protect children from online material “harmful to minors.” (We discuss COPA in Section 3.2.2.)
researchers, and others had already copied it. Some made the whole data set available on the Web again.5*
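Re-identification of this kind does not require special tools. The sketch below is a hypothetical illustration in Python; the log entries, user IDs, and the tiny "public directory" are invented for this example, and real re-identification combines many more clues (home towns, hobbies, names of acquaintances) in essentially the same way.

    # A minimal, hypothetical sketch of re-identification from an
    # "anonymized" search log. All entries are invented.
    search_log = [                      # (coded user ID, query), no names
        (4417749, "landscapers in springfield"),
        (4417749, "jane placeholder"),  # a user searching her own name
        (4417749, "numb fingers"),
        (982347, "cheap flights to denver"),
    ]
    public_directory = {                # name -> town, from public records
        "jane placeholder": "springfield",
    }

    # Group the queries by coded ID.
    queries_by_user = {}
    for user_id, query in search_log:
        queries_by_user.setdefault(user_id, []).append(query.lower())

    # Flag IDs whose queries mention both a listed name and that person's town.
    for user_id, queries in queries_by_user.items():
        text = " ".join(queries)
        for name, town in public_directory.items():
            if name in text and town in text:
                print(f"Coded user {user_id} is probably {name.title()}")

The point of the sketch is that once any single query ties a coded ID to a real person, every other query recorded under that ID is tied to the person as well.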
Example: Smartphones
With so many clever, useful, and free smartphone apps available, who thinks twice about downloading them? Researchers and journalists took a close look at smartphone software and apps and found some surprises.
Some Android phones and iPhones send location data (essentially the location of nearby cell towers) to Google and Apple, respectively. Companies use the data to build location-based services that can be quite valuable for the public and for the companies. (Industry researchers estimate the market for location services to be in the billions of dollars.) The location data is supposed to be anonymous, but researchers found, in some cases, that it included the phone ID.
Roughly half the apps in one test sent the phone’s ID number or location to other companies (in addition to the one that provided the app). Some sent age and gender information to advertising companies. The apps sent the data without the user’s knowledge or consent. Various apps copy the user’s contact list to remote servers. Android phones and iPhones allow apps to copy photos (and, for example, post them on the Internet) if the user permits the app to do certain other things that have nothing to do with photos. (Google said this capability dated from when photos were on removable memory cards and thus less vulnerable.6 This is a reminder that designers must regularly review and update security design decisions.)
A major bank announced that its free mobile banking app inadvertently stored account numbers and security access codes in a hidden file on the user’s phone. A phone maker found a flaw in its phones that allowed apps to access email addresses and texting data without the owner’s permission. Some iPhones stored months of data, in a hidden file, about where the phone had been and when, even if the user had turned off location services. Data in such files are vulnerable to loss, hacking, and misuse. If you do not know the phone stores the information, you do not know to erase it. Given the complexity of smartphone software, it is possible that the companies honestly did not intend the phones to do these things.†
Why does it matter? Our contact lists and photos are ours; we should have control of them. Thieves can use our account information to rob us. Apps use features on phones that indicate the phone’s location, the light level, movement of the phone, the presence of other phones nearby, and so on. Knowing where we have been over a period of time (combined with other information from a phone) can tell a lot about our activities and
* Members of AOL sued the company for releasing their search queries, claiming the release violated roughly 10 federal and state laws.
† The various companies provided software updates for these problems.
1. Files on hundreds of thousands of students, applicants, faculty, and/or alumni from the University of California, Harvard, Georgia Tech, Kent State, and several other universities, some with Social Security numbers and birth dates (stolen by hackers).
2. Names, birth dates, and possibly credit card numbers of 77 million people who play video games online using Sony’s PlayStation (stolen by hackers). Another 24 million accounts were exposed when hackers broke into Sony Online Entertainment’s PC-game service.
3. Records of roughly 40 million customers of TJX discount clothing stores (T.J. Maxx, Marshalls, and others), including credit and debit card numbers and some driver’s license numbers (stolen by hackers). (More about the TJX incident: Section 5.2.5.)
4. Bank of America disks with account information (lost or stolen in transit).
5. Credit histories and other personal data for 163,000 people (purchased from a huge database company by a fraud ring posing as legitimate businesses).
6. Patient names, Social Security numbers, addresses, dates of birth, and medical billing information for perhaps 400,000 patients at a hospital (on a laptop stolen from a hospital employee’s car).
7. More than 1000 Commerce Department laptops, some with personal data from Census questionnaires. (Thieves stole some from the cars of temporary Census employees; others, employees simply kept.)
8. Confidential contact information for more than one million job seekers (stolen from Monster.com by hackers using servers in Ukraine).
Figure 2.1 Lost or stolen personal information.7
interests, as well as with whom we associate (and whether the lights were on). As we mentioned in Section 1.2.1, it can also indicate where we are likely to be at a particular time in the future.
Some of the problems we described here will have been addressed by the time you read this; the point is that we are likely to see similar (but similarly unexpected) privacy risks and breaches in each new kind of gadget or capability.
Stolen and lost data
Criminals steal personal data by hacking into computer systems, by stealing computers and disks, by buying or requesting records under false pretenses, and by bribing employees of companies that store the data. (Hacking: Section 5.2.) Shady information brokers sell data (including cellphone records, credit reports, credit card statements, medical and work records, and location of relatives, as well as information about financial and investment accounts) that they obtain illegally or by questionable means. Criminals, lawyers, private investigators, spouses, ex-spouses, and law enforcement agents are among the buyers. A private investigator could have obtained some of this information in the past, but not nearly so easily, cheaply, and quickly.
Another risk is accidental (sometimes quite careless) loss. Businesses, government agencies, and other institutions lose computers, disks, memory cards, and laptops containing sensitive personal data (such as Social Security numbers and credit card numbers) on thousands or millions of people, exposing people to potential misuse of their information and lingering uncertainty. They inadvertently allow sensitive files to be public on the Web. Researchers found medical information, Social Security numbers, and other sensitive personal or confidential information about thousands of people in files on the Web that simply had the wrong access status.
The websites of some businesses, organizations, and government agencies that make account information available on the Web do not sufficiently authenticate the person accessing the information, allowing imposters access. (More about authentication techniques: Section 5.3.2.) Data thieves often get sensitive information by telephone by pretending to be the person whose records they seek. They provide some personal information about their target to make their request seem legitimate. That is one reason why it is important to be cautious even with data that is not particularly sensitive by itself.
Figure 2.1 shows a small sample of incidents of stolen or lost personal information (the Privacy Rights Clearinghouse lists thousands of such incidents on its website). In many incidents, the goal of thieves is to collect data for use in identity theft and fraud, crimes we discuss in detail in Chapter 5.
A summary of risks
The examples we described illustrate numerous points about personal data. We summarize here:
. Anything we do in cyberspace is recorded, at least briefly, and linked to our computer or phone, and possibly our name.
. With the huge amount of storage space available, companies, organizations, and governments save huge amounts of data that no one would have imagined saving in the recent past.
. People often are not aware of the collection of information about them and their activities.
. Software is extremely complex. Sometimes businesses, organizations, and website managers do not even know what the software they use collects and stores.8
. Leaks happen. The existence of the data presents a risk.
. A collection of many small items of information can give a fairly detailed picture of a person’s life.
. Direct association with a person’s name is not essential for compromising privacy. Re-identification has become much easier due to the quantity of personal information stored and the power of data search and analysis tools.
. If information is on a public website, people other than those for whom it was intended will find it. It is available to everyone.
. Once information goes on the Internet or into a database, it seems to last forever. People (and automated software) quickly make and distribute copies. It is almost impossible to remove released information from circulation.
. It is extremely likely that data collected for one purpose (such as making a phone call or responding to a search query) will find other uses (such as business planning, tracking, marketing, or criminal investigations).
. The government sometimes requests or demands sensitive personal data held by businesses and organizations.
. We often cannot directly protect information about ourselves. We depend on the businesses and organizations that manage it to protect it from thieves, accidental collection, leaks, and government prying.
2.1.3 Terminology and Principles for Managing Personal Data
We use the term personal information often in this chapter. In the context of privacy issues, it includes any information relating to, or traceable to, an individual person. The term does not apply solely to what we might think of as sensitive information, although it includes that. It also includes information associated with a particular person’s “handle,” user name, online nickname, identification number, email address, or phone number. Nor does it refer only to text. It extends to any information, including images, from which someone can identify a living individual.
Informed consent and invisible information gathering
The first principle for ethical treatment of personal information is informed consent. There is an extraordinary range to the amount of privacy different people want. Some blog about their divorce or illnesses. Some pour out details of their romantic relationships on television shows or to hundreds of social network friends. Others use cash to avoid leaving a record of their purchases, encrypt all their email,* and are angry when someone collects information about them. When a business or organization informs people about its data collection and use policies or about the data that a particular device or application collects, each person can decide, according to his or her own values, whether or not to interact with that business or organization or whether to use the device or application.
Invisible information gathering describes collection of personal information without the person’s knowledge. The important ethical issue is that if someone is not aware of the collection and use, he or she has no opportunity to consent or withhold consent. We gave
* Encrypting data means putting it in a coded form so that others cannot read it.
several examples involving smartphones and their apps in the previous section. Here are examples from other contexts.
. A company offered a free program that changed a Web browser’s cursor into a cartoon character. Millions of people installed the program but then later discovered that the program sent to the company a report of the websites its users visited, along with a customer identification number in the software.9
. “Event data recorders” in cars record driving speed, whether or not the driver is wearing a seatbelt, and other information.
. “History sniffers” are programs that collect information about a person’s online activity based on the different colors a browser uses to display sites recently visited.
. Software called spyware, often downloaded from a website without the user’s knowledge, surreptitiously collects information about a person’s activity and data on his or her computer and then sends the information over the Internet to the person or company that planted the spyware. Spyware can track someone’s Web surfing for an advertising company or collect passwords and credit card numbers typed by the user. (Some of these activities are illegal, of course.) (Sophisticated snooping technologies: Section 2.2.2.)
When our computers and phones communicate with websites, they must provide information about their configuration (e.g., the Web browser used). For a high percentage of computers, there is enough variation and detail in configurations to create a “fingerprint” for each computer. Some companies provide device fingerprinting software for combating fraud and intellectual property theft and for tracking people’s online activity in order to target advertising. Both collection of configuration information and building of activity profiles are invisible. Financial firms that use device fingerprinting for security of customer accounts are likely to say so in a privacy policy. We are less likely to know when someone is using it to build marketing profiles.
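To make the idea of a configuration “fingerprint” concrete, here is a minimal sketch, assuming Python’s standard hashlib module; the attribute names and values are invented, and real fingerprinting services combine many more signals (installed fonts, canvas rendering, and so on).

    import hashlib

    def device_fingerprint(attributes):
        # Join the reported configuration details in a fixed order and
        # hash them into one short identifier.
        canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

    # Invented example of what a browser might reveal about itself.
    visitor = {
        "user_agent": "ExampleBrowser/9.0 (Windows NT 10.0; Win64; x64)",
        "screen": "1920x1080",
        "timezone": "UTC-5",
        "language": "en-US",
        "plugins": "pdf-viewer,media-player",
    }

    print(device_fingerprint(visitor))  # same configuration -> same ID each visit

Because the same browser tends to report the same details on every visit, the hash acts as a stable identifier even when no name or cookie is present.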
Whether or not a particular example of data collection is invisible information gathering can depend on the level of public awareness. Some people know about event data recorders in cars; most do not.10 Before the release of AOL user search data described in Section 2.1.2, collecting search query data was an example of invisible information gathering; for many people it still is. (A legal remedy for secret data collection: Section 5.2.6.) Many businesses and organizations have policy statements or customer agreements that inform customers, members, and subscribers of their policy on collecting and using personal data, but many people simply do not read them. And if they read them, they forget. Thus, there can be a significant privacy impact from the many automated systems that collect information in unobvious ways, even when people have been informed. However, there is an important distinction between situations where people are informed but not aware and situations where the information gathering is truly covert, such as in spyware and in some of the smartphone apps we described in Section 2.1.2.
Cookies
Cookies are files a website stores on a visitor’s computer.11 Within the cookie, the site stores and then uses information about the visitor’s activity. For example, a retail site might store information about products we looked at and the contents of our virtual “shopping cart.” On subsequent visits, the site retrieves information from the cookie. Cookies help companies provide personalized customer service and target advertising to the interests of each visitor. They can also track our activities on many
sites and combine the information. At first, cookies were controversial because the very idea that websites were storing files on the user’s computer without the user’s knowledge startled and disturbed people. Today, more people are aware of cookies and use tools to prevent or delete them. In response, some companies that track online activity developed more sophisticated “supercookies” that recreate deleted cookies and are difficult to find and remove.
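Mechanically, a cookie is just a small set of named values that a site asks the browser to store and send back on later visits. A minimal sketch using Python’s standard http.cookies module (the cookie names and values are invented for illustration):

    from http.cookies import SimpleCookie

    # First visit: the server asks the browser to remember a visitor ID
    # and the last product viewed (names and values are invented).
    cookie = SimpleCookie()
    cookie["visitor_id"] = "v-20481"
    cookie["last_viewed"] = "sku123"
    cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # keep for a year
    print(cookie.output())          # sent to the browser as Set-Cookie headers

    # A later visit: the browser sends the same values back in its Cookie
    # header, and the site reads what it stored earlier.
    returned = SimpleCookie()
    returned.load("visitor_id=v-20481; last_viewed=sku123")
    print(returned["last_viewed"].value)    # prints: sku123

Everything the site later “remembers” about the visitor in this exchange is data the site itself chose to store, which is why cookie contents and lifetimes are a privacy design decision.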
Secondary use, data mining, matching, and profiling
My most private thoughts, my personal tragedies, secrets about other people, are mere data of a transaction, like a grocery receipt.
—A woman whose psychologist’s notes were read by an insurer.12
Secondary use is the use of personal information for a purpose other than the one for which the person supplied it. Examples include sale of consumer information to marketers or other businesses, use of information in various databases to deny someone a job or to tailor a political pitch, the Internal Revenue Service searching vehicle registration records for people who own expensive cars and boats (to find people with high incomes), use of text messages by police to prosecute someone for a crime, and the use of a supermarket’s customer database to show alcohol purchases by a man who sued the store because he fell down.
Data mining means searching and analyzing masses of data to find patterns and develop new information or knowledge. The research using social network data and smartphone data that we described in Section 1.2.1 are examples. Matching means combining and comparing information from different databases, often using an identifier such as a person’s Social Security number or their computer’s Internet address to match records. Profiling means analyzing data to determine characteristics of people most likely to engage in certain behavior. Businesses use these techniques to find likely new customers. Government agencies use them to detect fraud, to enforce other laws, and to find terrorists. Data mining, computer matching, and profiling are, in most cases, examples of secondary use of personal information.
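As a concrete illustration of matching and profiling, the sketch below joins two record sets on a shared identifier and applies a simple rule. All of the records, field names, and the profiling rule are invented; the point is only that, given a common key such as a Social Security number, combining databases takes a few lines of code.

    # Hypothetical matching example: two databases collected for different
    # purposes, joined on a shared identifier. All records are invented.
    vehicle_registrations = [
        {"ssn": "000-00-0001", "vehicle": "luxury sedan", "value": 95000},
        {"ssn": "000-00-0002", "vehicle": "compact car", "value": 9000},
    ]
    tax_returns = [
        {"ssn": "000-00-0001", "reported_income": 28000},
        {"ssn": "000-00-0002", "reported_income": 31000},
    ]

    income_by_ssn = {r["ssn"]: r["reported_income"] for r in tax_returns}

    # A crude profiling rule: flag people whose reported income looks low
    # for the vehicle they register.
    for registration in vehicle_registrations:
        income = income_by_ssn.get(registration["ssn"])
        if income is not None and registration["value"] > 2 * income:
            print(registration["ssn"], "owns a", registration["vehicle"],
                  "on a reported income of", income)

The same join, run over millions of real records collected for other purposes, is what turns separately unremarkable databases into a detailed profile.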
We will see examples of secondary use throughout this chapter. One of the controversial issues about personal information is the degree of control people should have over secondary uses of information about them. The variety of uses illustrated by the few examples we gave above suggests that quite different answers are appropriate for different users and different uses.
After informing people about what personal information an organization collects and what it does with that information, the next simplest and most desirable privacy policy is to give people some control over secondary uses. The two most common forms for providing such choice are opt out and opt in. Under an opt-out policy, one must check or click a box on a contract, membership form, or agreement or contact the organization to request that they not use one’s information in a particular way. If the person does not take action, the presumption is that the organization may use the information. Under an opt-in policy, the collector of the information may not use it for secondary uses unless the person explicitly checks or clicks a box or signs a form permitting the use. (Be careful not to confuse the two. Under an opt-out policy, more people are likely to be “in,” and under an opt-in policy, more people are likely to be “out,” because the default presumption is the opposite of the policy name.) Opt-out options are now common. Responsible, consumer-friendly companies and organizations often set the default so that they do not share personal information and do not send marketing emails unless the person explicitly allows it— that is, they use the opt-in policy. Particularly in situations where disclosing personal information can have negative consequences and it is not obvious to a customer that the organization might disclose it, a default of nondisclosure without explicit permission (that is, an opt-in policy) is the responsible policy.
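The practical difference between the two policies comes down to a default. A toy sketch (the function and field names are invented) of how the default decides the outcome for the many people who never touch the box:

    def may_use_for_marketing(person_checked_box, policy):
        # person_checked_box is True only if the person actively checked
        # the box on the form; most people never do.
        if policy == "opt-in":
            return person_checked_box       # silence means no secondary use
        if policy == "opt-out":
            return not person_checked_box   # silence means use is allowed
        raise ValueError("unknown policy")

    # The default decides the outcome for everyone who ignores the box.
    print(may_use_for_marketing(False, "opt-in"))    # False: data not used
    print(may_use_for_marketing(False, "opt-out"))   # True: data used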
Fair information principles
Privacy advocates have developed various sets of principles for protection of personal data. They are often called Fair Information Principles or Fair Information Practices.13
Figure 2.2 presents such a list of principles. Informed consent and restrictions on secondary uses show up in the first and third principles. You will rarely see the last point in Figure 2.2 included among Fair Information Principles, but I consider it an important one. Some companies and organizations turn over personal data to law enforcement agents and government agencies when requested. Some do so only if presented with a subpoena or other court order. Some challenge subpoenas; some do not. Some inform their customers or members when they give personal data to the government; some do not. The entity that holds the data decides how far to go in protecting the privacy of its members or customers. The individual whose data the entity might release is rarely aware of the government request. Thus, the entities that hold the data have a responsibility to those people. Planning ahead for various possible scenarios, developing a policy, and announcing it (and following it) are all part of responsible management of other people’s personal data.
1. Inform people when you collect information about them, what you collect, and how you use it.
2. Collect only the data needed.
3. Offer a way for people to opt out from mailing lists, advertising, and other secondary uses. Offer a way for people to opt out from features and services that expose personal information.
4. Keep data only as long as needed.
5. Maintain accuracy of data. Where appropriate and reasonable, provide a way for people to access and correct data stored about them.
6. Protect security of data (from theft and from accidental leaks). Provide stronger protection for sensitive data.
7. Develop policies for responding to law enforcement requests for data.
Figure 2.2 Privacy principles for personal information.
Many businesses and organizations have adopted some version of Fair Information Practices. Laws in the United States, Canada, and European countries (among others) require them in many situations. These principles are reasonable ethical guidelines. However, there is wide variation in interpretation of the principles. For example, businesses and privacy advocates disagree about what information businesses "need" and for how long.
It can be difficult to apply the fair information principles to some new technologies and applications. They do not fully address privacy issues that have arisen with the increase of cameras in public places (such as police camera systems and Google’s Street View), the enormous amount of personal information people share in social networks, and the ubiquity and power of smartphones. For example, when someone puts personal information in a tweet to thousands of people, how do we determine the purpose for which he or she supplied the information? Can any recipient use the information in any way? How widely distributed must information be before it is public in the sense that anyone can see or use it? Even when people have agreed to share information, consequences of new ways of sharing or new categories of information can be unexpected and problematic. For example, in Section 2.3.2 we discuss default settings for features in social networks that have significant consequences.
Employers search employee social media: Section 6.3.1
2.2 The Fourth Amendment, Expectation of Privacy, and Surveillance Technologies
In George Orwell’s dystopian novel 1984, Big Brother (the government) could watch everyone via "telescreens" in all homes and public places. There was little crime and little
political dissent—and no love and no freedom. Today, the government does not have to watch every move we make, because so many of our activities leave data trails in databases available to government agencies.* When Big Brother wants to take a direct look at us and our activities, he uses sophisticated new surveillance tools. In this section, we consider the impact of these tools on privacy and look into their compatibility with constitutional and legal protections from government intrusions.
2.2.1 The Fourth Amendment
The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.
—Fourth Amendment, U.S. Constitution
The U.S. Constitution protects a right to privacy from government intrusion, most explicitly in the Fourth Amendment. The U.S. Supreme Court has interpreted other parts of the Bill of Rights to provide a constitutional right to privacy from government in other areas as well. England has a similar tradition, as expressed in William Pitt’s colorful statement in 1763:
The poorest man may in his cottage bid defiance to all the force of the Crown. It may be frail; its roof may shake; the wind may blow through it; the storms may enter; the rain may enter—but the King of England cannot enter . . . .14
Here, we look at how databases, surveillance technology, and popular consumer gadgets threaten this right. Although the discussion in this section is in the context of the U.S. Fourth Amendment and U.S. Supreme Court rulings, the new technological risks of intrusion by governments are similar in other countries.
The Fourth Amendment sets limits on the government’s rights to search our homes and businesses and to seize documents and other personal effects. It requires that the government have probable cause for the search and seizure. That is, there must be good evidence to support the specific search. Two key problems arise from new technologies. First, much of our personal information is no longer safe in our homes or the individual offices of our doctors and financial advisors. We carry a huge amount of personal information on smartphones and laptops. Much personal information is in huge databases outside of our control. Many laws allow law enforcement agencies to get information from nongovernment databases without a court order. Federal privacy rules allow law enforcement agencies to access medical records without court orders. The USA PATRIOT
* The use of myriad personal-data systems to investigate or monitor people is sometimes called dataveillance, short for "data surveillance."
Act (passed after the terrorist attacks in 2001) eased government access to many kinds of personal information, including library and financial records, without a court order. The second factor weakening Fourth Amendment protections is that new technologies allow the government to search our homes without entering them, to search our persons from a distance without our knowledge, and to extract all the data on a cellphone (including deleted data and password protected data) in less than two minutes at a traffic stop.
As we consider all the personal information available to government agencies now, we can reflect on the worries of Supreme Court Justice William O. Douglas about the potential abuse from government access to only the records of someone’s checking account. In 1974, he said:
In a sense a person is defined by the checks he writes. By examining them agents get to know his doctors, lawyers, creditors, political allies, social connections, religious affiliation, educational interests, the papers and magazines he reads, and so on ad infinitum. These are all tied in to one’s social security number, and now that we have the data banks, these other items will enrich that storehouse and make it possible for a bureaucrat—by pushing one button—to get in an instant the names of the 190 million Americans who are subversives or potential and likely candidates.15
Today’s readers should not miss the irony of the last sentence: 190 million was almost the entire population of the United States at the time.
With each new data storage or search technology, law enforcement agencies and civil libertarians argue the question of whether the Fourth Amendment applies. In the next few sections, we discuss such technologies and some principles the Supreme Court has established.
When the American Republic was founded, the framers established a libertarian equilibrium among the competing values of privacy, disclosure, and surveillance. This balance was based on technological realities of eighteenth-century life. Since torture and inquisition were the only known means of penetrating the mind, all such measures by government were forbidden by law. Physical entry and eavesdropping were the only means of penetrating private homes and meeting rooms; the framers therefore made eavesdropping by private persons a crime and allowed government to enter private premises only for reasonable searches, under strict warrant controls. Since registration procedures and police dossiers were the means used to control the free movement of “controversial” persons, this European police practice was precluded by American governmental practice and the realities of mobile frontier life.
—Alan F. Westin, Privacy and Freedom16
2.2.2 New Technologies, Supreme Court Decisions, and Expectation of Privacy
The principles laid down in this opinion . . . apply to all invasions on the part of government and its employees of the sanctity of a man’s home and the privacies of life. It is not the breaking of his doors, and the rummaging in his drawers, that constitutes the essence of the offense; but it is the invasion of his indefeasible right of personal security, personal liberty and private property.
—Justice Joseph Bradley, Boyd v. United States, 1886.
“Noninvasive but deeply revealing” searches
The title above is from Julian Sanchez’s description of a variety of search and detection technologies.17 Many sound like science fiction; they are not. These technologies can search our homes and vehicles but do not require police to physically enter or open them. They can search our bodies beneath our clothes from a distance without our knowledge. What restrictions should we place on their use? When should we permit government agencies to use them without a search warrant?
Noninvasive but deeply revealing search tools (some in use and some in development) include particle sniffers that detect many specific drugs and explosives, imaging systems that detect guns under clothing from a distance, devices that analyze the molecular composition of truck cargo without opening the truck, thermal-imaging devices (to find heat lamps for growing marijuana, for example), and devices that locate a person by locating his or her cellphone. These devices have obvious valuable security and law enforcement applications, but the technologies can be used for random searches, without search warrants or probable cause, on unsuspecting people. As Sanchez points out, we live “in a nation whose reams of regulations make almost everyone guilty of some violation at some point.”18 Before the government begins using these tools on, say, ordinary people bringing medications home from Canada, making their own beer, or keeping a banned sweetener or saturated fat in their home (or whatever might be illegal in the future), it is critical for privacy protection that we have clear guidelines for their use— and, in particular, clarification of when such use constitutes a search requiring a search warrant.
Supreme Court decisions and expectation of privacy
Several Supreme Court cases have addressed the impact of earlier technology on Fourth Amendment protection. In Olmstead v. United States,19 in 1928, the government had used wiretaps on telephone lines without a court order. The Supreme Court allowed the wiretaps. It interpreted the Fourth Amendment to apply only to physical intrusion and only to the search or seizure of material things, not conversations. Justice Louis Brandeis
dissented, arguing that the authors of the Fourth Amendment did all they could to protect liberty and privacy—including privacy of conversations—from intrusions by government based on the technology available at the time. He believed that the court should interpret the Fourth Amendment as requiring a court order even when new technologies give the government access to our personal papers and conversations without entering our homes. In Katz v. United States, in 1967, the Supreme Court reversed its position and ruled that the Fourth Amendment does apply to conversations and that it applies in public places in some situations. In this case, law enforcement agents had attached an electronic listening and recording device on the outside of a telephone booth to record a suspect’s conversation. The court said that the Fourth Amendment “protects people, not places,” and that what a person “seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.” To intrude in places where a reasonable person has a reasonable expectation of privacy, government agents need a court order.
Although the Supreme Court’s decision in Katz v. United States strengthened Fourth Amendment protection in some ways, there is significant risk in relying on reasonable “expectation of privacy” to define the areas where law enforcement agents need a court order. Consider the two technologies in the box nearby. One tracks private actions in public view; the other tracks people in private places.
As well-informed people come to understand the capabilities of modern surveillance tools, we might no longer expect privacy from government, in a practical sense. Does that mean we should not have it? The Supreme Court recognized this problem in Smith v. Maryland, in which it noted that, if law enforcement reduces actual expectation of privacy by actions "alien to well-recognized Fourth Amendment freedoms," this should not reduce our Fourth Amendment protection. However, the Court has interpreted "expectation of privacy" in a very restrictive way. For example, it ruled that if we share information with businesses such as our bank, then we have no reasonable expectation of privacy for that information (United States v. Miller, 1976). Law enforcement agents do not need a court order to get the information. This interpretation seems odd. We do expect privacy of the financial information we supply a bank or other financial institution. We expect confidentiality in many kinds of information we share with a few, sometimes carefully selected, others. We share our Web activity with ISPs, websites, and search engine companies merely by typing and clicking. We share many kinds of personal information at specific websites where we expect it to be private. Is it safe from warrantless search?
In Kyllo v. United States (2001), the Supreme Court ruled that police could not use thermal-imaging devices to search a home from the outside without a search warrant. The Court stated that where “government uses a device that is not in general public use, to explore details of the home that would previously have been unknowable without physical intrusion, the surveillance is a ‘search.’” This reasoning suggests that when a technology becomes more widely used, the government may use it for surveillance without a warrant.
Tracking cars and cellphones
Law enforcement agents track thousands of people’s locations each year. Sometimes they have a court order to do so, and sometimes they do not. Do they need one? We describe two key cases as examples.
In 2012, the Supreme Court decided U.S. v. Jones, its first major case of digital technology surveillance. Does the Fourth Amendment prohibit police from secretly attaching a GPS tracking device to a person’s vehicle without a search warrant? The police said no; they could have observed the suspect’s car as it moved about on public streets. They argued the GPS device is a labor-saving device. The Court disagreed. There are two arguments in favor of Fourth Amendment protection in this case. First, a vehicle is one of a person’s "effects" that the Fourth Amendment explicitly protects. Second, tracking a person’s location for a month, 24 hours a day, as in this case, goes beyond someone observing the car pass by in public; it violates a person’s expectation of privacy. The Court agreed (unanimously) with the first argument. Police need a search warrant to attach a surveillance device to a private vehicle. The justices recognized that expectation of privacy would be a key issue in tracking cases where directly attaching a device is not necessary, but the majority
chose to leave a decision about that to future cases.∗20
The police had one argument against expectation of privacy in U.S. v. Jones: the vehicle drove around in public view. Suppose a person is at home, at a friend’s or lover’s home, inside a church or a health facility, or in any private space. Law enforcement agencies use a device to locate a person by locating his or her cellphone, even when the person is not actively using the phone.† Police do not need to enter private premises or physically attach anything to a person’s property. Thus, expectation of privacy is a key issue here. Law enforcement agencies argue that cellphone tracking (which they have used more than 1000 times, according to a Wall Street Journal investigation) does not require a search warrant because a person who uses a cellphone service has no expectation of privacy about the location data the phone transmits to cell towers. This view might surprise most cellphone owners. The Supreme Court has not yet heard a case about this technology.
* Four justices wrote an opinion that the tracking also violated expectation of privacy. † The device pretends to be a cell tower. Agents drive around with it and get the target phone to connect to it in several locations. They then triangulate on the phone from the data the device collects.
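As a rough illustration of the triangulation mentioned in the footnote above, the sketch below estimates a position from a few range measurements by least squares. The positions and distances are made up, and real systems rely on far richer signal-timing and signal-strength models:

```python
# A minimal 2-D trilateration sketch with invented data.
import numpy as np

def trilaterate(points, distances):
    """Estimate (x, y) from known points and measured distances.

    Subtracting the first range equation from the others turns the
    problem into a linear least-squares system.
    """
    p = np.asarray(points, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    return estimate

# Three positions where the device connected to the phone, and the
# estimated ranges at each (all values invented, in meters).
positions = [(0, 0), (800, 0), (0, 600)]
ranges = [500, 550, 400]
print(trilaterate(positions, ranges))
```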
This standard may allow time for markets, public awareness, and technologies to develop to provide privacy protection against the new technology. Is it a reasonable standard— a reasonable adaptation of law to new technology? Or should the court have permitted the search? Or should the government have to satisfy the requirements of the Fourth Amendment for every search of a home where a warrant would have been necessary before the technology existed?
Our use of these new technologies doesn’t signal that we’re less interested in privacy. The idea of the government monitoring our whereabouts, our habits, our acquaintances, and our interests still creeps us out. We often just don’t know it’s going on until it’s too late.
—Judge Alex Kozinski21
2.2.3 Search and Seizure of Computers and Phones
Privacy in group association may . . . be indispensable to preservation of freedom of association, particularly where a group espouses dissident beliefs.
—The Supreme Court, ruling against the state of Alabama’s attempt to get the membership list of the National Association for the Advancement of Colored People (NAACP) in the 1950s22
The NAACP’s membership list was not on a computer in the 1950s. It undoubtedly is now. We consider several issues about how the Fourth Amendment applies to searches of computers, phones, and other electronic devices. How far does a search warrant extend when searching a computer? When is a search warrant needed?
The Fourth Amendment requires that search warrants be specific about the object of the search or seizure. Courts traditionally take the view that if an officer with a warrant sees evidence of another crime in plain view, the officer may seize it and prosecutors may use it. But the amount of information or evidence that might be in plain view in a house or office is small compared to what is on a computer. A computer at a business will have information about a large number of people. Membership lists, business records, medical records, and myriad other things can be on the same computer that law enforcement agents may search with a search warrant for specific, limited items. Access by law enforcement agents to all the data on a computer or device can be a serious threat to privacy, liberty, and freedom of speech.
How should we interpret “plain view” for a search of computer or smartphone files? A broad interpretation—for example, “all unencrypted files”—invites abuse. Agents could get a warrant for a small crime for which they have supporting evidence, and then go on fishing expeditions for other information. This thwarts the Fourth Amendment’s requirement that a warrant be specific. In one case, while searching a man’s computer with a search warrant for evidence of drug crimes, an officer saw file names suggesting illegal content not related to the warrant. He opened files and found child pornography. An appeals court said the names of files might be considered to be in plain view, but the contents of the files were not.23 Although the crime in this case is a very unpleasant one, the principle protects us from abuses by the police.
In an investigation of the use of performance-enhancing drugs by professional baseball players, law enforcement agents obtained a search warrant for computer files of laboratory records on drug tests for 10 specific players. The lab files they seized contained records on hundreds of baseball players, hockey players, and ordinary people who are not athletes. The agents found that more than 100 baseball players tested positive for steroid use. This case received much attention in the news when the names of prominent players who allegedly tested positive leaked to the news media. A federal appeals court ruled that the information on all but the original 10 players was beyond the scope of the search warrant and the government was wrong to seize it.24
Suppose law enforcement agents have a search warrant for a computer but find that the files are encrypted. Must the owner supply the encryption key? The Fifth Amendment to the U.S. Constitution specifies that a person cannot be forced to testify against himself. However, courts sometimes allow the government to require a person to provide keys or combinations to a safe. Rulings in federal courts have been inconsistent about whether such a requirement can apply to encryption keys. (In many cases, law enforcement agents decrypt the files by other means.)
More about encryption: Section 2.5.1
What happened to the Fourth Amendment? Was it repealed somehow?
—A judge, commenting on the seizure of lab records for drug tests25
Phones and laptops
A mobile phone might contain contacts, numbers for calls made and received, email, text messages, documents, personal calendars, photos, a history of Web browsing, and a record of where the phone has been. For many people, the phone is a traveling office, containing proprietary and confidential information. A lawyer’s phone might contain information about clients and cases—legally protected from access by police.
Police may search an arrested person (without a search warrant) and examine personal property on the person (in pockets, for example) or within his or her reach. Is a search warrant required before the police can search the contents of the person’s cellphone? Should a search warrant be required?
This seems like a classic "no-brainer." The vast collection of information on a cellphone is the kind of information the Fourth Amendment is intended to protect. A judge who ruled against a cellphone search said the justifications for permitting police to search an arrested person were to find and take weapons and to prevent the person from hiding or destroying evidence. Once the police have custody of a phone, it is safe from destruction and police must wait until they have a search warrant before retrieving information from the phone. The Ohio Supreme Court ruled that searching an arrested person’s phone
without a search warrant is unconstitutional:* people have an expectation of privacy for the contents of their phones.26
But the California Supreme Court ruled otherwise. It said that search of the contents of a cellphone was permitted because the phone was personal property found on the arrested person. Police have searched cellphones taken from arrested people in dozens of cases without warrants. Eventually, a case raising this issue will be heard by the U.S. Supreme Court. The result will have profound implications for privacy. In the meantime, lawyers suggest leaving a cellphone out of reach while driving.
Customs and border officials search luggage when U.S. citizens return from another country and when foreigners enter the United States. Border officials search, and sometimes seize, laptops and phones of journalists, businesspeople, and other travelers. Is searching a laptop equivalent to searching luggage? Or, because of the amount and kind of personal information they contain, does searching them at the border require reasonable suspicion of a crime? A federal appeals court ruled that customs agents do not need reasonable suspicion of a crime to search laptops, phones, and other electronic devices. Lawsuits and debate on the issue are ongoing.27
2.2.4 Video Surveillance and Face Recognition
We are used to security cameras in banks and convenience stores. They help in investigations of crimes. Prisons use video surveillance systems for security. Gambling casinos use them to watch for known cheaters. Video surveillance systems monitor traffic and catch drivers who run red lights. In these cases, people are generally aware of the surveillance. After the 2001 terrorist attacks, the police in Washington, D.C., installed cameras that zoom in on individuals a half mile away.
Cameras alone raise some privacy issues. When combined with face recognition systems, they raise even more. Here are some applications of cameras and face recognition and some relevant privacy and civil liberties issues.
In the first large-scale, public application of face recognition, police in Tampa, Florida, scanned the faces of all 100,000 fans and employees who entered the 2001 Super Bowl (causing some reporters to dub it Snooper Bowl). The system searched computer files of criminals for matches, giving results within seconds. People were not told that their faces were scanned. Tampa installed a similar system in a neighborhood of popular restaurants and nightclubs. Police in a control room zoomed in on individual faces and checked for matches in their database of suspects.28 In two years of use, the system did not recognize anyone that the police wanted, but it did occasionally identify innocent people as wanted felons.
* The court allowed for exceptions in certain kinds of emergencies.
The ACLU compared the use of the face recognition system at the Super Bowl to a computerized police lineup to which innocent people were subject without their knowledge or consent. Face recognition systems had a poor accuracy rate in the early 2000s,29 but the technology improved, along with the availability of photos to match against (tagged photos in social networks, for example). A police officer can now snap a photo of a person on the street and run a cellphone app for face recognition. (Another app scans a person’s iris and collects fingerprints.)
Some cities have increased their camera surveillance programs, while others gave up their systems because they did not significantly reduce crime. (Some favor better lighting and more police patrols—low tech and less invasive of privacy.) Toronto city officials refused to let police take over their traffic cameras to monitor a protest march and identify its organizers. In a controversial statement, the Privacy Commissioner of Canada argued that the country’s Privacy Act required a “demonstrable need for each piece of personal information collected” to carry out government programs and therefore recording activities of large numbers of the general public was not a permissible means of crime prevention.30
England was the first country to set up a large number of cameras in public places to deter crime. There are millions of surveillance cameras in Britain. A study by a British university found a number of abuses by operators of surveillance cameras, including collecting salacious footage, such as people having sex in a car, and showing it to colleagues. Defense lawyers complain that prosecutors sometimes destroy footage that might clear a suspect.31 Enforcing a curfew for young people is one of the uses of public cameras in England. This application suggests the kind of monitoring and control of special populations the cameras make easy. Will police use face recognition systems to track political dissidents, journalists, political opponents of powerful people—the kinds of people targeted for illegal or questionable surveillance in the past? In 2005, the British government released a report saying Britain’s closed-circuit TV systems were of little use in fighting crime. The only successful use of the cameras was in parking lots where they helped reduce vehicle crime.32 Later that year, photos taken by surveillance cameras helped identify terrorists who set off bombs in the London subway. After rioters burned and looted neighborhoods in England in 2011, police used recordings from street cameras and face recognition systems to identify rioters. It is rare for all the facts or strong arguments to support only one side of an issue. What trade-offs between privacy and identifying criminals and terrorists are we willing to make?
The California Department of Transportation photographed the license plates on cars driving in a particular area. Then it contacted the car owners for a survey about traffic in the area. Hundreds of drivers complained. These people objected vehemently to what they considered unacceptable surveillance by a government agency even when the agency photographed only their license plates, not their faces—for a survey, not a police action. Many ordinary people do not like being tracked and photographed without their knowledge.
Clearly, some applications of cameras and face recognition systems are reasonable, beneficial uses of the technology for security and crime prevention. But there is a clear need for limits, controls, and guidelines. How should we distinguish appropriate from inappropriate uses? Should international events such as the Olympics, which are sometimes terrorist targets, use such systems? Should we restrict technologies such as face recognition systems to catching terrorists and suspects in serious crimes, or should we allow them in public places to screen for people with unpaid parking tickets? Do people have the right to know when and where cameras are in use? In the United States, police must have a reason for requiring a person to be fingerprinted. Should similar standards apply to their use of face recognition and iris scanning? If we consider these issues early enough, we can design privacy-protecting features into the technology, establish well-thought-out policies for their use, and pass appropriate privacy-protecting legislation before, as the Supreme Court of Canada worries in the quote below, "privacy is annihilated."
To permit unrestricted video surveillance by agents of the state would seriously diminish the degree of privacy we can reasonably expect to enjoy in a free society. . . . We must always be alert to the fact that modern methods of electronic surveillance have the potential, if uncontrolled, to annihilate privacy.
—Supreme Court of Canada.33
This is a public meeting!
—Reporter Pete Tucker, upon his arrest for taking a photo with his cellphone at an open meeting of a U.S. government agency. Newsman Jim Epstein was then arrested for recording the arrest of Tucker on his own phone.34
2.3 The Business and Social Sectors
2.3.1 Marketing and Personalization
Acxiom provides complete and accurate pictures of customers and prospects, powering all marketing and relationship efforts.
—Acxiom website35
Marketing is an essential task for most businesses and organizations. It is one of the biggest uses of personal information—by businesses, political parties, nonprofit organizations, and advocacy groups. Marketing includes finding new customers, members, or voters and encouraging old ones to continue. It includes advertising one’s products, services, or cause. It includes how to price products and when and to whom to offer discounts.
Data mining and clever marketing36
Customers of the British retailing firm Tesco permit the company to collect information on their buying habits in exchange for discounts. The company identifies young adult males who buy diapers and sends them coupons for beer— assuming that, with a new baby, they have less time to go to a pub.
Target beats that. Target’s data miners analyzed purchases of women who signed up for baby registries. They discovered that pregnant women tend to increase their purchases of a group of 25 products. So if a woman starts buying more of several of those products (e.g., unscented lotions and mineral supplements), Target starts sending coupons and ads for pregnancy and baby products. It can even time them for stages of the pregnancy.
To compete with Wal-Mart, Tesco aimed to identify those customers who were most price conscious and hence most likely to be attracted to Wal-Mart’s low prices. By analyzing purchase data, the company determined which customers regularly buy the cheapest version of products that are available at more than one price level. Then they determined what products those customers buy most often, and they set prices on those products below Wal-Mart’s. (A rough sketch of this sort of analysis follows the box.)
Are these examples of desirable competition or scary intrusiveness and manipulation of consumers?
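A minimal sketch of the price-sensitivity analysis attributed to Tesco above, assuming a simple purchase table; the column names, numbers, and the 80% threshold are invented for illustration:

```python
# Flag customers who regularly buy the cheapest available version of a product.
import pandas as pd

purchases = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2, 3],
    "product":     ["bread", "milk", "soap", "bread", "milk", "soap"],
    "price_paid":  [0.90, 1.10, 1.00, 1.80, 2.40, 1.00],
})

# Lowest price at which each product sells (the "cheapest version").
cheapest = purchases.groupby("product")["price_paid"].transform("min")
purchases["bought_cheapest"] = purchases["price_paid"] <= cheapest

# Customers who buy the cheapest version most of the time are treated as
# price conscious; the products they buy most often could then be repriced.
share_cheapest = purchases.groupby("customer_id")["bought_cheapest"].mean()
price_conscious = share_cheapest[share_cheapest >= 0.8].index.tolist()
print(price_conscious)   # [1, 3] with the toy data above
```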
Through most of the 20th century, businesses sent out catalogs and advertisements based on a few criteria (age, gender, and neighborhood, for example). Computers and the increased storage capacity of the 1980s and 1990s began a revolution in targeted marketing. Now, businesses store and analyze terabytes of data, including consumer purchases, financial information, online activity, opinions, preferences, government records, and any other useful information to determine who might be a new customer and what new products and services an old customer might buy. They analyze thousands of criteria to target ads both online and offline. Online retailers make recommendations to you based on your prior purchases and on those of other people with similar buying patterns. Websites greet us by name and present us with options based on prior activity at that site.
To many, the idea that merchants collect, store, and sell data on their purchasing habits is disturbing. These activities impinge upon a key aspect of privacy: control of information about oneself. Privacy advocates and some consumers object to advertising based on consumer purchase histories and online activity. Marketers argue that finely targeted marketing is useful to the consumer and that it reduces overhead and, ultimately, the cost of products. L.L. Bean, a big mail-order business, says it sends out fewer catalogs as it does a better job of targeting customers. A Web ad company said users clicked on 16% of ads displayed based on the user’s activity profile—many more than the 1% typical for untargeted Web ads. Another firm says that 20–50% of people used the personalized coupons it provided on screen or by email, compared with the 1–5% redemption rate for
newspaper inserts. The companies say targeting ads via personal consumer information reduces the number of ads overall that people will see and provides ads that people are more likely to want.37 Many people like the personalization of ads and recommendations. Targeting is so popular with some people that Google advertised that its Gmail displays no untargeted banner ads.
Some kinds of less obvious personalization trouble people more (when they learn of them). The displays, ads, prices, and discounts you see when shopping online might be different from those others see. Some such targeting is quite reasonable: A clothing site does not display winter parkas on its home page for a shopper from Florida. Some sites offer discounts to first-time visitors. Some display any of hundreds of variations of a page depending on time of day, gender, location, and dozens of other attributes of a person’s session. (Some sites guess a visitor’s gender based on clicking behavior.38) If a person hesitates over a product, a site might offer something extra, perhaps free shipping. Is this collection and use of behavioral information an example of inappropriate invisible information gathering? When we shop in stores, sales clerks can see our gender and our approximate age. They can form other conclusions about us from our clothing, conversation, and behavior. Good salespeople in expensive specialty stores, car dealerships, flea markets, and third-world street markets make judgments about how much a potential customer will pay. They modify their price or offer extras accordingly. Is the complex software that personalizes shopping online merely making up for the loss of information that would be available to sellers if we were shopping in person? Are some people uneasy mainly because they did not realize that their behavior affected what appears on their screen? Are people uneasy because they did not realize that websites can determine (and store) so much about them when they thought they were browsing anonymously? Is the uneasiness something we will get over as we understand the technology better? Or are there privacy threats lurking in these practices?
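In greatly simplified form, the kind of rule-driven personalization described above might look something like the sketch below; the session fields, thresholds, and offers are invented, not any retailer's actual system:

```python
# Toy rule-based page personalization driven by session attributes.
def personalize(session: dict) -> dict:
    page = {"banner": "standard", "offer": None}
    if session.get("region") == "FL":
        page["banner"] = "swimwear"                # no winter parkas for Florida
    if session.get("first_visit"):
        page["offer"] = "10% off your first order"
    if session.get("seconds_on_product", 0) > 60:  # the shopper is hesitating
        page["offer"] = "free shipping on this item"
    return page

print(personalize({"region": "FL", "first_visit": False,
                   "seconds_on_product": 90}))
```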
Companies can use face recognition systems in video game consoles and televisions to target ads to the individual person who is playing a game or watching TV. What risks to privacy does this entail? Is it unethical to include such features? Will most people come to like the customization? Do they understand that if they see ads targeted to their interests, someone somewhere is storing information about them?
Our examples so far have been commercial situations. The Democratic and Republican parties use extensive databases on tens of millions of people to profile those who might vote for their candidates. The parties determine what issues to emphasize (and which to omit) in personalized campaign pitches. The databases include hundreds of details such as job, hobbies, type of car, and union membership.39 One party might send a campaign flyer to a conservative union member that emphasizes its labor policy but does not mention, say, abortion, while another party might do the opposite.
The issue is informed consent
Technological and social changes make people uncomfortable, but that does not mean the changes are unethical. Some privacy advocates want to ban all advertising targeted by
online behavior. It should be clear that targeted or personalized marketing is not, in itself, unethical. Most of the legitimate concern has to do with how marketers get the data they use. In some cases there is consent, in some there is not, and in many the complexity of the situation makes consent unclear.
Collection of consumer data for marketing without informing people or obtaining their consent was widespread, essentially standard practice, until roughly the late 1990s. Sometimes, small print informed consumers, but often they did not see it, did not understand the implications, or ignored it. Gradually, public awareness and pressure for improvement increased, and data collection and distribution policies improved. Now websites, businesses, and organizations commonly provide explicit, multi-page statements about what information they collect and how they use the information. They provide opt-out and opt-in options. (Federal laws and regulations require specific privacy protections for financial and medical information.40) There are still many companies that get it wrong, whether out of lack of concern for people’s privacy or by misjudging what people want. There is also a vast world of data collection over which we have little or no direct control. When someone consents to a company’s use of his or her consumer information, the person probably has no idea how extensive the company is and how far the data could travel. Firms such as Acxiom (quoted at the beginning of this section), a large international database and direct-marketing company, collect personal data from a huge number of online and offline sources. Such companies that maintain huge consumer databases buy (or merge with) others, combining data to build more detailed databases and dossiers. They sell data and consumer profiles to businesses for marketing and "customer management." Most people do not know such firms exist.
Extensive and hidden tracking of online activity led to calls for a "Do Not Track" button in browsers. The exact meaning and effects of such buttons are yet to be determined. The idea is that users would have one clear place to indicate that they do not want their Web activity tracked and stored. Many advertisers, providers of popular Web browsers, and large Internet companies agreed to implement and comply with some version of Do Not Track.
Awareness varies among consumers, and many do not read privacy policies. Is it the user’s responsibility to be aware of the data collection and tracking policies of a site he or she visits? Does a person’s decision to interact with a business or website constitute implicit consent to its posted data collection, marketing, and tracking policies? How clear, obvious, and specific must an information-use policy be? How often should a site that runs (or allows third parties to run) tracking software remind users? Some people who allow extensive tracking and information collection might later regret specific decisions they made. Whose responsibility is it to protect them? Can we protect them without eliminating options for the people who use them sensibly? Potentially negative future consequences of choices we make now (such as not getting enough exercise) are common in life. We can educate consumers and encourage responsible choices. (At the end of the chapter, we list nonprofit organizations that help do this.) Respect for people’s autonomy
means letting them make their own choices. Designing systems ethically and responsibly means including ways to inform and remind users of unobvious data collection, of changes in policies or features, and of risks.
Paying for consumer information
When businesses first began building extensive consumer databases, some privacy advocates argued that they should pay consumers for use of their information. In many circumstances, they did (and do) pay us indirectly. For example, when we fill out a contest entry form, we trade data for the opportunity to win prizes. Many businesses give discounts to shoppers who use cards that enable tracking of their purchases. Many offer to trade free products and services for permission to send advertising messages or to collect information. Some privacy advocates criticize such programs. Lauren Weinstein, founder of Privacy Forum, argues that among less affluent people the attraction of free services may be especially strong, and it "coerces" them into giving up their privacy.41 People do not understand all the potential uses of their information and the long-term consequences of the agreements. On the other hand, such programs offer an opportunity for people with little money to trade something else of value (information) for goods and services they desire. Free-PC started the trend, in 1999, with its offer of 10,000 free PCs in exchange for providing personal information and watching advertising messages. Hundreds of thousands of people swamped the company with applications in the first day.
In any case, these early programs are dwarfed by the development of social networking, free video sites, and a huge number of other websites that provide information and services for free. People understand that advertising funds them. Gmail targets ads to individual users by analyzing the user’s email messages. Some privacy advocates were horrified: it reads people’s email! In exchange for permission to do so, Gmail provides free email and other services. Millions of people signed up. The success of these businesses and services shows that many people do not object to retailers using their purchase history or email and do not consider the intrusion of online ads to be extremely bothersome, nor their Web surfing to be particularly sensitive. Do they understand the potential consequences?
2.3.2 Our Social and Personal Activity
Broadcast Yourself.
—Slogan on YouTube’s home page42
Social networks—what we do
There are two aspects of social networks to consider: our own responsibility for what we share (how we risk our privacy and that of our friends) and the responsibilities of the companies that host our information.
Many young people post opinions, gossip, and pictures that their friends enjoy. Their posts might cause trouble if parents, potential employers, law enforcement agents, or various others see them. An 18-year-old who posts sexy photos of herself in bathing suits is thinking about her friends viewing them, not potential stalkers or rapists. People who try to clean up their online personas before starting a job search find that it is hard to eliminate embarrassing material. Some social network apps ask for personal information—such as religion, political views, and sexual orientation—about one’s friends as well as oneself. Do people think about how the information might be used and whether their friends would like it disclosed?
Why was it for so long standard practice to stop mail and newspaper delivery when going away on a trip? This one detail about location (“away from home”) was important to protect from potential burglars. Yet, now, a great many people post their location (and that of their friends) to social networks.
Social networkers, with hundreds or thousands of network friends they never met, probably do not give enough thought to the implications of the personal information they make available. When someone initially chooses privacy settings, will that person later remember who is getting real-time reports on his or her status and activities?
Government agencies and businesses do many things wrong, but individuals also do not always exercise appropriate thought and care for their own privacy, future, and safety.
Polls show that people care about privacy. Why don’t they act that way?
—Ian Kerr
Social networks—what they do
We use Facebook for our examples here because it has so many features and so many members, and because it has made instructive mistakes. The principles apply to other social media and other websites.
Facebook regularly introduces new services, new ways to share with friends and stay up-to-date on their activities. Several times, Facebook seriously misjudged how members would react and made poor choices. Some of the examples we describe quickly generated storms of criticism from tens of thousands to hundreds of thousands of members as well as from privacy advocates.
News feeds send recent changes in a member’s personal information, friends list, and activities to that member’s friends.44 Facebook said it did not change any privacy settings when it introduced the feeds. It sends the information only to people the members had already approved and who could already see it if they looked. Within a day or two, hundreds of thousands of Facebook members protested vehemently. Why? The ease of accessing information can sometimes be more important than the fact that it is available somewhere. Many people do not check on their hundreds of friends regularly. The feeds, however, spread information to everyone instantly. Here is just one kind of instance where
it makes a difference: In the physical world, we might share information about the end of a relationship, a serious illness, or a family problem with a few, chosen, close friends. Gradually, as we adjust to the new situation, others might learn of the event. The feeds remove the emotionally protective delay.
When Facebook began telling members about purchases their friends made, problems ranged from spoiling surprise gifts to embarrassing and worrisome disclosures. Should Facebook introduce such features turned "on" for everyone? Or should the company announce them and let members opt in with a click? When Facebook introduced a face recognition tool to help members tag friends in photos, the default was that the tool was on for all members. There was a way to opt out, but many users were not aware of the new feature, so they did not know to opt out. Facebook’s Places feature lets users tag friends who are at their location (whether or not the friend is actually there). What should the default settings be?
Angry members are not good for business. These incidents demonstrate the importance, from both an ethical perspective and a business perspective, of giving careful thought to the implications and risks of new features and the selection of default settings. Changes that might seem small and subtle can have big impacts on people’s perceptions of privacy, on risk, and on feelings of comfort. People might be happy if a few friends tag them in a few photos, but they might be very uneasy if an automated system tags every photo they appear in. Quantity can make a difference in perceived quality (in particular, in one’s feeling of control of information about oneself). In complex environments, such as social networks with their many features and members, an opt-in policy is preferable—that is, a policy where members must explicitly turn the feature on, or else it remains off. In complex environments, it is also valuable to have a range of options. For example, for a tagging feature (for location or photos), options can include informing the person and allowing removal of the tag, requesting permission for each tag before it goes live, and allowing a member to completely opt out of being tagged. (Facebook modified Places to include a range of levels of protection.)
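A sketch of what such a range of tagging options could look like in code; the option names and the opt-in default are hypothetical, not any social network's real settings:

```python
# Hypothetical tagging-permission options, with an opt-in (off) default.
from enum import Enum

class TagPolicy(Enum):
    OFF = "opt out of tagging entirely"
    REVIEW_FIRST = "each tag needs my approval before it goes live"
    NOTIFY = "tag immediately, but notify me and let me remove it"

DEFAULT_POLICY = TagPolicy.OFF   # feature stays off until the member enables it

def handle_new_tag(member_policy: TagPolicy) -> str:
    if member_policy is TagPolicy.OFF:
        return "tag rejected"
    if member_policy is TagPolicy.REVIEW_FIRST:
        return "tag queued for the member's approval"
    return "tag published; member notified and may remove it"

print(handle_new_tag(DEFAULT_POLICY))   # "tag rejected"
```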
According to the Federal Trade Commission (FTC), Facebook violated its stated policies in several instances: by giving users’ IDs to advertisers along with data on user activity, by allowing third-party apps full access to member personal data, and by failing to delete some of a member’s data when the member deleted the account. Such actions, in violation of a company’s own statements about its practices, are deceptive; they thwart informed decisions and agreements. We might dislike, denounce, debate, and disagree about the ethics of some data practices. Deceptive practices are more clearly unethical (or unethical at a stronger level) than mistakenly or carelessly making poor choices about defaults.
Responsibility of free services
We should appreciate the astounding amount of free service available to us from social network companies—as well as search engines, communication systems such as Twitter, websites full of expert information, and so on. We can choose to use them or not. At
the same time, the businesses that run these free services have a responsibility to their users. If you invite your neighbors to use your car anytime they wish without asking, you have an ethical responsibility not to leave the keys in the car when the brakes are not working. It does not matter that you do not charge a fee. Companies may not, ethically, offer attractive services and then cause a significant risk of harm, especially when the risk is hidden or unexpected.
Life in the clouds
Soon after a woman started writing a personal blog, she discovered that someone she had not seen in years read it. This horrified her. Perhaps she thought only people to whom she gave the Web address read the blog. She did not realize that it showed up high in search results for her name.45 Another woman liked the feature on a social network site that told her which members read her profile. She was surprised and upset to find that people whose profiles she read knew that she read them. After Facebook suggested that two women might want to be friends, one of them discovered that they were both married to the same man.
The first incident reminds us that some people do not know or understand enough about how the Web works to make good decisions about what to put there.* The second indicates that some people do not think carefully about it. It also illustrates a very common phenomenon: people often want a lot of information about others, but they do not want others to have access to the same kinds of information about themselves. The bigamist did not realize that Facebook would notice his two wives had something in common.
Some people include their birth date in online profiles or in résumés they post on job- hunting sites. Genealogy sites are very popular. People create family trees with complete profiles of family members, including birth dates and mother’s maiden name. Medical and financial institutions used this same information (birth dates and mother’s maiden name) to verify a customer’s identity. We can change a disclosed password; we cannot change our birth date or mother’s maiden name.
The Web is public. Most people are decent and harmless, but many are evil and dangerous. Pedophiles have websites that link to sites of Cub Scouts, Brownies (the young version of Girl Scouts), junior high school soccer teams, and so on—sites with pictures of children and sometimes names and other personal information. That is scary. It does not mean that such organizations should not put pictures on their websites. It suggests, however, that they consider whether to include children’s names, whether to require registration for use of the site, and so on.
* In an unusual example of initiative, the woman studied the techniques used to rank search results and modified her blog so that it no longer showed up prominently in searches for her name.
Years ago, when many homes had answering machines connected to telephones, some people, instead, used answering services. Messages left for them resided on recording machines at the service’s business site. I recall my surprise that people were comfortable having their personal messages on machines outside their control. How quaint and old-fashioned that concern seems now. Our cellphone and email messages routinely reside on computers outside our home or office. Text messages are retrievable months later. After many incidents of exposure of embarrassing messages, we still see individuals, politicians, lawyers, celebrities, and businesspeople writing sensitive, rude, or compromising things in email, text, and tweets with the apparent belief that no one but the intended recipient will ever see them.
Millions of Americans prepare their tax returns online. Do they think about where their income and expenditure data are going? How long the data remain online? How well secured the data are? Small businesses store all their accounting information online (in the “cloud”) on sites that provide accounting services and access from anywhere. Do the business owners check the security of the sites? Several medical websites provide an easy place for people to store their medical records. Various companies offer services where people store all their data (email, photos, calendars, files) on the company’s servers, instead of on their own PC or laptop. You can store an inventory of your valuable property on the Web (for free) to help with insurance claims after a fire or tornado. The companies supplying this service might all be honest, but the data, if leaked or hacked, is a shopping list for thieves.
There are big advantages to all these services. They are convenient. We do not have to manage our own system. We do not have to do backups. We can get to our files from anywhere with Internet access. We can more easily share files and collaborate with others on projects. There are disadvantages too. We cannot access our files when the network is down or if there is a technical problem at the company that stores them. But the more serious risks are to privacy and security. We lose control. Outside our home, our files are at risk of loss, theft, misuse by employees, accidental exposure, seizure by government agencies, uses by the service provider described in an agreement or privacy policy we did not read, uses we ignored when signing up for the service, and later uses that no one anticipated. We might not care who else sees our vacation photos. We might decide the convenience of filling out tax forms online or storing our medical records online outweighs the risks. The point is to be aware and to make the decision consciously. For computer professionals, awareness of the risks should encourage care and responsibility in developing secure systems to protect the sensitive information people store online.
2.3.3 Location Tracking
Global positioning systems (GPS), cellphones, radio frequency identification (RFID) tags,* and other technologies and devices enable a variety of location-based applications—that is, computer and communications services that depend on knowing exactly where a person or object is at a particular time. Since the introduction of the iPhone, there has been an explosion in such applications. The applications are extraordinarily diverse
* RFID tags are small devices that contain an electronic chip and an antenna. The chip stores identification data (and possibly other data) and controls operation of the tag. The antenna transmits and receives radio signals for communicating with devices that read the tag.
and have significant benefits. However, they add detailed information about our current location and our past movements to the pool of information that computer systems store about us, with all the potential threats to privacy.
To analyze risks, we should always consider unintended, as well as intended, uses. Recall from Section 2.2.2 that law enforcement agencies locate people by locating their phone. Details of the technology are secret and the device is probably expensive. But that is temporary. Eventually there will be an app for that. So imagine that anyone can enter a person’s ID number (perhaps a phone number) on their own mobile device and ask where that person is now. Or perhaps a device could sweep a particular location and detect identifying devices of the people there—or identify them by face recognition. Who might a person not want to get this information? Thieves. A violent spouse or ex-spouse. A divorce lawyer. An annoying or nosy neighbor. A stalker. Co-workers or business associates. Anyone else who might object to your religion, politics, or sexual behavior. The government. (Oh, we see that our new teacher is at a meeting of Alcoholics Anonymous. Who is in that medical marijuana store or gun store right now?) Extensive records of where we were provide more details to the ever-growing profiles and dossiers businesses and governments build about us. With fast search, matching, and analysis tools, they can add more detail about who we spend time with and what we are doing. In Chapter 1, we mentioned that researchers learn about social organization and the spread of disease (among other things) by studying huge amounts of cellphone data. Such statistical data can be extremely valuable to us all, but a cellphone identifies a person, and, thus, the tracking information (if associated with the phone’s number or ID) is personal information and raises the usual issues of consent, potential secondary uses, risks of misuse, and so on. Care must be taken to ensure that such data are protected.
Tracking employees at work: Section 6.3.2
If accessed surreptitiously, stolen, disclosed accidentally, or acquired by government agencies, records of our location and movements pose threats to privacy, safety, and liberty. Privacy and industry organizations
are developing guidelines for use of location-tracking applications to implement principles in Figure 2.2 and protect against some of the risks.46
Studying the behavior of customers in a store or other facility is a big potential application of location tracking. For example, a supermarket or an amusement park might want to analyze customer traffic patterns within the facility to plan a better layout, to determine how much time people spend inside, or to analyze waiting times. The privacy implications and risks of monitoring people's movements vary from little to great depending on how the tracking system does its work. Suppose, for example, an amusement park such as Disneyland wants to study visitor traffic patterns, detect crowds and long lines, and so on. It can do so with a location-emitting ticket that people get when they enter and discard when they leave. It need have no information connected to the person or family. For such a system, privacy is not an issue. There would be a temptation, however, to include demographic data and possibly identifying data on the tracker.
Who’s at the Bar?
Hundreds of bars installed cameras with a face recognition system to provide data to a website and smartphone app. The app tells users the number of people at a particular bar, the male/female ratio, and the approximate age range. Each bar gets summary statistics on its patrons that could be useful for advertising or other business planning. The system does not identify individual people and does not store the video. So this is not a privacy issue. Or is it?
The point is that such an application can remain utterly unthreatening, or it can drift over the boundary into location tracking and privacy infringement. The bar owners do not control the system, so they cannot be certain
that what they tell their customers about it is true. (There are many examples of systems collecting and storing data without the knowledge of the businesses that use the system.) The developer and operator of the system might exercise great care to protect patrons' privacy, or they might succumb to temptation to add new features that require storing video or identifying individuals. Awareness of potential risks and understanding of good privacy practices are essential for both the software developers who invent and upgrade such systems and the managers who make decisions about what features to implement.
Tools for parents
Many technologies help parents track their children’s physical location. Cellphone services enable parents to check a child’s location from the parent’s mobile device. Devices installed in a car tell parents where their teens are and how fast they are driving. A company sells wireless watchband transmitters for children, so parents can monitor them. RFID tags in shoes and clothes can be monitored hundreds of feet away. These might be very helpful with young children who wander off in a crowded place.
Tracking children can increase safety, but there are parenting issues and risks involved in using tracking tools. At what age does tracking become an invasion of the child’s privacy? Should parents tell children about the tracking devices and services they are using? Informed consent is a basic principle for adults. At what age does it apply to children? Will intense tracking and monitoring slow the development of a child’s responsible independence?
A monitoring system that sends easily read or easily intercepted signals could decrease rather than increase the safety of a child. Child molesters and identity thieves could collect personal data. Parents need to be aware of potential for false alarms and for a false sense of security. For example, a child might lose a phone or leave a tagged article of clothing somewhere. Older kids might figure out how to thwart tracking systems. Clearly, how and when to use surveillance tools should involve thoughtful decisions for families.
Pets, prisoners, and people with Alzheimer’s disease can wear devices that locate them if they wander off. Veterinarians implant ID chips under the skin of pets and farm animals.
Foiling poachers, following turtles, tracking guitars
Owners tag very valuable and extremely rare plants, both in the wild and in gardens, with tracking chips so they can locate them if stolen.
Satellite technology and microprocessors enormously improved animal tracking. Scientists now attach tiny transmitters to rare birds and other animals to study their behavior and learn how to protect their food sources. Researchers learned that some animals travel much farther than previously thought: Sea turtles swim from the Caribbean to Africa. A nesting albatross flew from Hawaii to the San Francisco Bay, a weeklong round-trip, to get food for its young. To encourage interest from the public, researchers set up websites where we can follow the animals' movements.47
These are valuable services. What happens when the same technologies track people?
I recently toured a guitar factory. The tour guide showed us a partially complete guitar neck. And there, on the front of the neck, was an RFID chip. The fret board, when attached to the neck, covers the chip. The guide explained how useful the chip was for tracking guitars through production and for finding a specific guitar in the stock room. The chip remains in the guitar when a customer buys it. Manufacturers put RFID tags in many other products, in addition to guitars, to track them through the manufacturing and sales processes. What is the potential for tracking people via the products they buy? Does it matter?
Some people have suggested doing this for prisoners and children. Does the suggestion of implanting tracking chips in people make you wonder if that is such a good idea? After heavy opposition from parents, a school dropped its proposal to require that all students wear an RFID-equipped device while on school grounds. The constant surveillance and the risks of misuse were enough, in the minds of many parents, to outweigh the benefits of a removable tracking device.
2.3.4 A Right to Be Forgotten
People sometimes want to remove information about themselves from the Internet or from a company's records. It could be an offensive comment made in anger, a photo on one's own social network page or a photo-sharing site, information in online directories, or personal data posted by others (e.g., on a genealogy site). It could be the profile an advertising company developed by tracking the person's Web activity, a collection of data gleaned from the person's smartphone use, or the collection of the person's search queries that a search engine stores. It could be unflattering images or information that other people posted. It could be a search engine's links to such material. Legislators and privacy
The right to be forgotten in the EU: Section 2.5.3
advocates in the United States and the European Union are promoting a legal right to demand that websites remove material about oneself. The right to have material removed, as a legal or ethical right, has come
to be called the “right to be forgotten.” The wide range of material a person might want to
remove suggests many practical, ethical, social, and legal questions and criticisms about such a right.48
The policies of various websites about removing material vary. Some sites with members, such as social networks, respond to a member's request to delete material the user posted and to delete a member's material when the member closes the account. When the material is not in a user's account, the situation is more complicated. Some sites, such as directories, collect information automatically; thus, deleted information can reappear. A filter system to prevent reposting for a particular person has the problem of correctly distinguishing that person from others with the same or similar names.
Should a company or website comply whenever a person requests deletion of a particular item or of his or her entire record? We understand that people do foolish things and regret them later. It is reasonable to let many of them be forgotten. If a person wants to delete something he or she posted on a website, it is reasonable, courteous, good-spirited, and perhaps a good business policy to comply. If someone else posts compromising photos or information from a person's past, removing it raises issues of free speech and truth. If the person is not a public figure and the information has no broad social value, removing it might be the reasonable, courteous thing to do. Complying with the request could be ethically acceptable and admirable but not ethically obligatory. In some cases, it could be a bad idea. The information might matter to people in a particular community. The person who posted it might have a good reason. The appropriate decision in specific cases might be difficult.
What about the data that advertisers and search engines collect about us? Must they, from an ethical standpoint, comply with a request from a person who wants his or her record deleted? If the companies collected the data secretly, without permission, or in violation of their stated privacy policies and terms of use, then there are good reasons to require its deletion independent of any right to be forgotten. Suppose the information is the set of a person’s search queries or something similar that a free website collects, and suppose the site makes its collection and use of the data clear in its terms of use. The company’s use of the data is, in part, our payment for the free service it provides. If the company agrees to delete people’s records upon request, it is providing its service to those people for free (or at a “discount” if they continue to view ads on the site). If a relatively small number of individuals request deletion of their data, a large company can probably afford to comply without significant inconvenience or reduction in the value it gets from analysis of user data. Many companies give some products and services for free. Again, complying with deletion requests could be ethically and socially admirable, good-spirited, and perhaps a good business policy. On the other hand, a person might make a deletion request to hide some illegal or offensive behavior or to remove evidence in a dispute of some kind.
If the right to be forgotten is a negative right (a liberty), it could mean that we may choose to stay off the Internet and become a recluse, but we cannot force someone else to remove a photo that we are in. As a positive right (a claim right), it is akin to requiring
that others erase their minds, as well as their photos, blogs, and links. It can mean that others may not write about a person or exchange specified information about the person— information gained without violating any of the person’s rights. This can infringe freedom of speech. In some applications, the right would mean that a person may break agreements (e.g., terms of use for a Web service) at will. There seems to be little if any basis for such an ethical right.
Are there contexts in which it makes sense to enforce a legal requirement to remove material when a person requests it? Perhaps for special populations, such as children (where parents might make the request or a young adult might want to remove seminude sexting photos sent to friends while in high school). Perhaps in other special situations. Legislators
Sexting: Section 3.2.3
must carefully craft any such legal requirement to avoid conflict with free speech, free flow of information, and contractual agreements. A
legal requirement to honor removal requests will be more of a burden to small sites than to large ones, which can develop software to help automate the process and have legal staffs to defend against complaints.
2.4 Government Systems
2.4.1 Databases
Federal and local government agencies maintain thousands of databases containing personal information. Examples include tax, property ownership, medical, travel, divorce, voter registration, bankruptcy, and arrest records. Others include applications for government grant and loan programs, professional and trade licenses, and school records (including psychological testing of children). And there are many, many more. Government databases help government agencies perform their functions, determine eligibility for government benefits programs, detect fraud in government programs, collect taxes, and catch people who are breaking laws. The scope of government activities is enormous, ranging from catching violent criminals to licensing flower arrangers. Governments can arrest people, jail them, and seize assets from them. Thus, the use and misuse of personal data by government agencies pose special threats to liberty and personal privacy. It seems reasonable to expect governments to meet an especially high standard for privacy protection and adherence to their rules.
The Privacy Act of 1974 is the main law about the federal government’s use of personal data. A summary of the provisions of the Act appears in Figure 2.3. Although this law was an important step in attempting to protect our privacy from abuse by federal agencies, it has problems. The Privacy Act has, to quote one expert on privacy laws, “many loopholes, weak enforcement, and only sporadic oversight.”49 The E-Government Act of 2002 added some privacy regulations for electronic data and services—for example, requiring agencies
• Restricts the data in federal government records to what is "relevant and necessary" to the legal purpose for which the government collects it
• Requires federal agencies to publish a notice of their record systems in the Federal Register so that the public may learn about what databases exist
• Allows people to access their records and correct inaccurate information
• Requires procedures to protect the security of the information in databases
• Prohibits disclosure of information about a person without his or her consent (with several exceptions)
Figure 2.3 Provisions of the Privacy Act of 1974.
to conduct privacy impact assessments for electronic information systems and to post privacy policies on agency websites used by the public.
The Government Accountability Office (GAO) is Congress’ “watchdog agency.” Over the past 25 years, the GAO has released numerous studies showing lack of compliance with the Privacy Act and other privacy risks and breaches. The GAO reported in 1996 that White House staffers used a “secret” database with records on 200,000 people (including ethnic and political information) without adequate access controls. A GAO study of 65 government websites found that only 3% of the sites fully complied with the fair information standards for notice, choice, access, and security established by the Federal Trade Commission (FTC) for commercial websites. (The FTC’s site was one that did not comply.) The GAO reported that the Internal Revenue Service (IRS), the Federal Bureau of Investigation (FBI), the State Department, and other agencies that use data mining to detect fraud or terrorism did not comply with all rules for collecting information on citizens. The GAO found dozens of weaknesses in the operation of the government’s communication network for transmitting medical data in the Medicare and Medicaid programs—weaknesses that could allow unauthorized access to people’s medical records.50
The IRS is one of several federal government agencies that collects and stores information on almost everyone in the country. It is also a major secondary user of personal information. Year after year, hundreds of IRS employees are investigated for unauthorized snooping in people's tax files. (An IRS employee who was a Ku Klux Klan member read tax records of members of his Klan group looking for income information that would indicate that someone was an undercover agent.) These abuses led to a law with tough penalties for government employees who snoop through people's tax information without authorization. However, a GAO report a few years later found that while the IRS had made significant improvements, the tax agency still failed to adequately protect people's financial and tax information. IRS employees were able to alter and delete data without authorization. Employees disposed of disks with sensitive taxpayer information without
erasing files. Hundreds of tapes and diskettes were missing. A report by the Treasury’s Inspector General said that the IRS did not adequately protect taxpayer information on more than 50,000 laptops and other storage media. Personal financial information that taxpayers provide to the IRS is “at risk” from hackers and disgruntled employees because many of the 250 state and federal agencies to which the IRS provides taxpayer information do not have adequate safeguards.51
Various reviews of compliance with the Privacy Act and the E-Government Act have highlighted weaknesses in these laws. The GAO advocated modifying the Privacy Act to cover all personally identifiable information collected and used by the federal government, thus closing gaping loopholes that exempt much government use of personal information from the law's provisions. The GAO advocated stricter limits on use of personal information. Recognizing that most people do not read the Federal Register, the GAO suggested better ways of informing the public about government databases and privacy policies. The Information Security and Privacy Advisory Board (a government advisory board) pointed out: "The Privacy Act does not adequately cover government use of commercially-compiled databases of personal information. The rules about the federal government's use of commercial databases, and even use of information gleaned from commercial search engines, have been vague and sometimes non-existent." Thus, agencies can bypass the protections of the Privacy Act by using private-sector databases and searches, rather than collecting the information themselves.52
Quis custodiet ipsos custodes? (Who will guard the guards themselves?)
—Juvenal
Database example: tracking college students
The U.S. Department of Education proposed establishing a database to contain the records of every student enrolled in a college or university in the United States. The proposal would require colleges and universities to provide and regularly update the records including each student's name, gender, Social Security number, major, courses taken, courses passed, degrees, loans, and scholarships (public and private). The government would keep the data indefinitely. The department has not yet implemented the proposal because of intense opposition. The government already has similar databases, and proposals for massive government databases of personal information appear regularly. We discuss this one as an example for analysis; the issues and questions we raise here apply in many other situations.
The student database would have many beneficial uses: The federal government spends billions of dollars each year on federal grants and loans to students but has no good way to measure the success of these programs. Do students who get aid graduate? What majors do they pursue? The database would help evaluate federal student aid programs
and perhaps lead to improvements in the programs. The database would provide more accurate data on graduation rates and on actual college costs. The ability to track the number of future nurses, engineers, teachers, and so on, in the educational pipeline can help shape better immigration policy and business and economic planning.
On the other hand, the collection of so much detail about each student in one place generates a variety of privacy risks. Several of the points in the list in Section 2.1.2 are relevant here. It is very likely that the government would find new uses for the data that are not part of the original proposal. Such a database could be an ideal target for identity thieves. Leaks of many sorts are possible and likely. There is potential for abuse by staff
More about identity theft: Section 5.3
members who maintain the data; for example, someone might release college records of a political candidate. And there would undoubtedly be errors in the database. If the department limits the data’s use to
generalized statistical analysis, errors might not have a big impact, but for some potential uses, the errors could be quite harmful.
Some educators worry that a likely eventual link between the database and public school databases (on children in kindergarten through high school) would contribute to “cradle-to-grave” tracking of childhood behavior problems, health and family issues, and so on.53
The planned uses of the database do not include finding or investigating students who are breaking laws, but it would be a tempting resource for law enforcement agencies. A Virginia state law requires colleges to provide the names and other identifying information for all students they accept. State police then check if any are in sex-offender registries. What else might they check for? What other government agencies might want access
Risks from errors in sex-offender registries: Section 8.1.2
to a federal student database? Would the Defense Department use the database for military recruiting? What potential risks arise if employers get access? All such uses would be secondary uses, without the consent of the students.
It makes sense for the government to monitor the effectiveness of the grants and loans it gives to college students. It is therefore reasonable to require data on academic progress and graduation from students who receive federal money or loan guarantees. But what justifies demanding the data on all other students? For statistics and planning, the government can do voluntary surveys, just as businesses and organizations, without the government's power of coercion, must do. Are the benefits of the database central enough to the fundamental responsibilities of government to outweigh the risks and to justify a mandatory reporting program of so much personal data on every student?*
* Critics of the proposal, including many universities, point out other risks and costs besides privacy. Colleges fear that collection of the data would lead to increased federal control and interference in management of colleges. The reporting requirements would impose a high cost on the schools. The whole project would have high costs to taxpayers.
The U.S. Census
The U.S. Constitution authorizes and requires the government to count the people in the United States every 10 years, primarily for the purpose of determining the number of Congressional representatives each state will have. Between 1870 and 1880, the U.S. population increased by 26%. It took the government nine years to process all the data from the 1880 census. During the 1880s, the population increased by another 25%. If the Census Bureau used the same methods, it would not complete processing data from the 1890 census until after the 1900 census was to begin. Herman Hollerith, a Census Bureau employee, designed and built punch-card processing machines—tabulators, sorters, and keypunch machines—to process census data.* Hollerith's machines did the complete 1890 population count in only six weeks—an amazing feat at the time. The Bureau completed the rest of the processing of the 1890 census data in seven years. It could have been done sooner, but the new machines allowed sophisticated and comprehensive analysis of the data that was not possible before. Here is an early example of computing technology enabling increased processing of data with the potential for good and bad effects: better use of information and invasion of privacy.
The Census Bureau requires everyone to provide name, gender, age, race, and relationship to people one lives with. It requires three million households a year to fill out a longer form that contains questions about marital history, ancestry, income, details about one's
home, education, employment, disabilities, expenditures, and other topics.
Census information is supposed to be confidential. Federal law says that "in no case shall information furnished . . . be used to the detriment of any respondent or other person to whom such information relates."54
During World War I, the Census Bureau provided names and addresses of young men to the government to help find and prosecute draft resisters. During World War II, the Census Bureau assisted the Justice Department in using data from the 1940 census to find U.S. citizens of Japanese ancestry; the army rounded up Japanese-Americans and put them in internment camps. With the introduction of electronic computers and the advances in computing technology, using the data “to the detriment of any respondent” is easier. Some cities used census data to find poor families who violated zoning or other regulations by doubling up in single-family housing. They evicted the families. A few years after the 9/11 terrorist attacks, at the request of the Department of Homeland Security, the Census Bureau prepared lists showing the number of people of Arab ancestry in various zip codes throughout the United States. A government spokesperson said they needed the data to determine which airports should have signs in Arabic. Privacy and civil liberties organizations were skeptical.55
* The company Hollerith formed to sell his machines later became IBM.
When considering each new system or policy for personal data use or data mining by government, we should ask many questions: Is the information it uses or collects accurate and useful? Will less intrusive means accomplish a similar result? Will the system inconvenience ordinary people while being easy for criminals and terrorists to thwart?
How significant are the risks to innocent people? Are privacy protections built into the technology and into the rules controlling usage?
Fighting terrorism
Before the terrorist attacks on the United States on September 11, 2001, law enforcement agencies lobbied regularly for increased powers that conflicted with privacy. Sometimes they got what they wanted; sometimes they did not. Generally, people resisted privacy intrusion by government. After the attacks on the World Trade Center and the Pentagon, more people became willing to accept uses of personal data and forms of search and surveillance that would have generated intense protest before. Two examples are the intrusive searches at airports and the Transportation Security Administration’s (TSA) requirement that airlines provide the name and birth date of every passenger to the TSA so that it can match people against its watch list. In 2012, the government extended to
Errors in terrorism watch lists: Section 8.1.2
five years the amount of time the National Counterterrorism Center may store data on Americans with no known connection to terrorism or criminal activity.
Proposals for new data mining programs to find terrorists and terrorist plots continue to appear. We summarize an interesting point Jeff Jonas and Jim Harper present about the suitability of data mining for this purpose.56 Marketers make heavy use of data mining. They spend millions of dollars analyzing data to find people who are likely to be customers. How likely? In marketing, a response rate of a few percent is considered quite good. In other words, expensive, sophisticated data mining has a high rate of false positives. Most of the people whom data mining identifies as potential customers are not. Many targeted people will receive ads, catalogs, and sales pitches they do not want. Junk mail and pop-up ads annoy people, but they do not significantly threaten civil liberties. A high rate of false positives in data mining for finding terrorist suspects does. Data mining might be helpful for picking terrorists out of masses of consumer data, but appropriate procedures
Reducing privacy intrusions for air travel
Travelers are familiar with x-ray scanning machines at airports. The machines display on a computer screen the image of a person's body and any weapons and packets of drugs hidden under clothing and wigs. The American Civil Liberties Union (ACLU) describes the scan as "a virtual strip search." In response to strong objections from the public and privacy advocates, the TSA modified the software to display a generic line drawing of a body, instead of the x-ray image of the actual person scanned.57
Why didn’t the TSA build in this obvious privacy-protecting feature at the beginning? There might be technical problems, but per- haps they did not because no law or regulation requires such privacy protection.
are essential to protect innocent but mistakenly selected people. Jonas and Harper argue that other methods for finding terrorists are more cost-effective and less threatening to the privacy and civil liberties of large numbers of people.
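To see why the false-positive problem matters so much more for terrorism screening than for marketing, consider a rough back-of-the-envelope calculation. The numbers below are hypothetical (they are not from Jonas and Harper); the short Python sketch simply works out the arithmetic, showing that even a screen that flags 99% of genuine targets and only 1% of innocent people still flags millions of innocents when genuine targets are extremely rare.

    # Hypothetical numbers for illustration only; not from Jonas and Harper.
    population = 300_000_000        # people screened
    true_targets = 1_000            # assumed number of genuine targets among them
    detection_rate = 0.99           # fraction of genuine targets the system flags
    false_positive_rate = 0.01      # fraction of innocent people wrongly flagged

    flagged_targets = true_targets * detection_rate
    flagged_innocent = (population - true_targets) * false_positive_rate

    print(f"Innocent people flagged: {flagged_innocent:,.0f}")   # about 3,000,000
    print(f"Genuine targets flagged: {flagged_targets:,.0f}")    # about 990
    print(f"Chance that a flagged person is a genuine target: "
          f"{flagged_targets / (flagged_targets + flagged_innocent):.2%}")  # about 0.03%

Under these assumed numbers, fewer than one flagged person in a thousand is a genuine target, which is why the procedures for handling the flagged people matter so much.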
2.4.2 Public Records: Access versus Privacy
Governments maintain "public records," that is, records that are available to the general public. Examples include bankruptcy records, arrest records, marriage license applications, divorce proceedings, property-ownership records (including mortgage information), salaries of government employees, and wills. These have long been public, but by and large they were available only on paper in government offices. Lawyers, private investigators, journalists, real estate brokers, neighbors, and others use the records. Now that it is so easy to search and browse through files on the Web, more people access public records for fun, for research, for valid personal purposes—and for purposes that can threaten the peace, safety, and personal secrets of others.
Public records include sensitive information such as Social Security numbers, birth dates, and home addresses. Maricopa County in Arizona, the first county to put numerous and complete public records on the Web, had the highest rate of identity theft in the United States.58 Obviously, certain sensitive information should be withheld from public-
More about identity theft: Section 5.3
record websites. That requires decisions about exactly what types of data to protect. It requires revisions to government software systems to prevent display of specified items. Because of the expense and lack
of accountability, incentives within government agencies to do this are weak. A few have adopted policies to block display of sensitive data in files posted online, and some states have laws requiring it. Several software companies produced software for this purpose, using a variety of techniques to search documents for sensitive data and protect them. Until new systems—in which such security is part of the basic design—replace older systems, the patches and add-ons, while helpful, are likely to miss a lot of sensitive data.
To illustrate more issues about public records and potential solutions, we describe a few kinds of specialized information (political contributions, flight information for private airplanes, and the financial statements of judges), then raise some questions.
Political campaign committees must report the name, address, employer, and donation amount for every donor who contributes more than $100 to a candidate for president. This information is available to the public. In the past, primarily journalists and rival campaigns examined it. Now it is on the Web and easy to search. Anyone can find out what candidate their neighbors, friends, employees, and employers support. We can also find the addresses of prominent people who might prefer to keep their address secret to protect their peace and privacy.
The pilots of the roughly 10,000 company airplanes in the United States file a flight plan when they fly. A few businesses have combined this flight information, obtained
from government databases, with aircraft registration records (also public government records) to provide a service telling where a particular plane is, where it is going, when it will arrive, and so on. Who wants this information? Competitors can use it to determine with whom top executives of another company are meeting. Terrorists could use it to track movements of a high-profile target. The information was available before, but not so easily and anonymously.
Federal law requires federal judges to file financial disclosure reports.59 The public can review these reports to determine whether a particular judge might have a conflict of interest in a particular case. The reports were available in print but not online. When an online news agency sued to make the reports available online, judges objected that information in the reports can disclose where family members work or go to school, putting them at risk from defendants who are angry at a judge. Ultimately, the reports were provided for posting online, with some sensitive information removed.60
The change in ease of access to information changes the balance between the advantages and disadvantages of making some kinds of data public. Whenever access changes significantly, we should reconsider old decisions, policies, and laws. Do the benefits of requiring reporting of small political contributions outweigh the privacy risks? Do the benefits of making all property ownership records public outweigh the privacy risks? Maybe. The point is that such questions should regularly be raised and addressed.
How should we control access to sensitive public records? Under the old rules for the financial statements of judges, people requesting access had to sign a form disclosing their identity. This is a sensible rule. The information is available to the public, but the record of who accessed it could deter most people intent on doing harm. Can we implement a similar system online? Technologies for identifying and authenticating people online are developing, but they are not yet widespread enough for use by everyone accessing sensitive public data on the Web. We might routinely use them in the future, but that raises another question: How will we distinguish data that requires identification and a signature for access from data the public should be free to view anonymously, to protect the viewer’s privacy?61
2.4.3 National ID Systems
In the United States, national identification systems began with the Social Security card in 1936. In recent decades, concerns about illegal immigration and terrorism provided the most support for a more sophisticated and secure national ID card. Opposition, based on concerns about privacy and potential abuse (and cost and practical problems), prevented significant progress on a variety of national ID proposals made by many government agencies. In this section, we review Social Security numbers, various issues about national ID systems, and the REAL ID Act, a major step toward turning driver’s licenses into national ID cards.
Social Security numbers62
The history of the Social Security number (SSN) illustrates how the use of a national identification system grows. When SSNs first appeared in 1936, they were for the exclusive use of the Social Security program. The government assured the public at the time that it would not use the numbers for other purposes. Only a few years later, in 1943, President Roosevelt signed an executive order requiring federal agencies to use the SSN for new record systems. In 1961, the IRS began using it as the taxpayer identification number. So employers and others who must report to the IRS require it. In 1976, state and local tax, welfare, and motor vehicle departments received authority to use the SSN. A 1988 federal law requires that parents provide their SSN to get a birth certificate for a child. In the 1990s, the Federal Trade Commission encouraged credit bureaus to use SSNs. A 1996 law required that states collect SSNs for occupational licenses, marriage licenses, and other kinds of licenses. Also in 1996, Congress required that all driver’s licenses display the driver’s SSN, but it repealed that law a few years later due to strong protests. Although the government promised otherwise, the SSN has become a general identification number.
We use our Social Security number for identification for credit, financial services, and numerous other services, yet its insecurity compromises our privacy and exposes us to fraud and identity theft. For example, a part-time English teacher at a California junior college used the Social Security numbers of some of her students, provided on her class lists, to open fraudulent credit card accounts. Because the SSN is an identifier in so many databases, someone who knows your name and has your SSN can, with varying degrees of ease, get access to your work and earnings history, credit report, driving record, and other personal data. SSNs appear on public documents and other openly available forms. Property deeds, which are public records (and now online), often require SSNs. For decades, SSNs were the ID numbers for students and faculty at many universities; the numbers appeared on the face of ID cards and on class rosters. The state of Virginia included SSNs on published lists of voters until a federal court ruled that its policy of requiring the SSN for voter registration was unconstitutional. Some employers used the SSN as an identifier and put it on badges or gave it out on request. Many companies, hospitals, and other organizations to which we might owe a bill request our SSN to run a credit check. Some routinely ask for an SSN and record it in their files, although they do not need it.
More than 30 years ago, the U.S. Department of Agriculture (USDA) began including the SSN as part of the ID number for farmers who received loans or grants. In 2007, the USDA admitted that since 1996 it had inadvertently included the SSNs of more than 35,000 farmers on the website where it posted loan details.63 This example illustrates how practices begun well before the Web have continuing repercussions. It also illustrates the importance of careful and thorough evaluation of decisions to put material on the Web. There are likely many similar examples that no one has yet noticed.
SSNs are too widely available to securely identify someone. Social Security cards are easy to forge, but that hardly matters, because those who request the number rarely ask for
the card and almost never verify the number. The Social Security Administration itself used to issue cards without verification of the information provided by the applicant. Criminals have little trouble creating false identities, while innocent, honest people suffer disclosure of personal information, arrest, fraud, destruction of their credit rating, and so on, because of problems with the SSN.
Gradually, governments and businesses began to recognize the risks of careless use of the SSN and reasons why we should not use it so widely. It could take a long time to undo the damage its widespread use has already done to privacy and financial security.
A new national ID system
Places like Nazi Germany, the Soviet Union, and apartheid South Africa all had very robust identification systems. True, identification systems do not cause tyranny, but identification systems are very good administrative systems that tyrannies often use.
—Jim Harper, Director of Information Policy Studies, Cato Institute64
Various national ID card proposals in recent years would require citizenship, employment,
More about biometrics: Section 5.3.3
health, tax, financial, or other data, as well as biometric information such as fingerprints or a retina scan, depending on the specific proposal and the government agency advocating it. In many proposals, the cards
would also access a variety of databases for additional information. Advocates of national ID systems describe several benefits: You would need the
actual card, not just a number, to verify identity. The cards would be harder to forge than Social Security cards. A person would need to carry only one card, rather than separate cards for various services as we do now. The authentication of identity would help reduce fraud both in private credit card transactions and in government benefit programs. Use of ID cards for verifying work eligibility would prevent people from working in the United States illegally. Criminals and terrorists would be easier to track and identify.
Opponents of national ID systems argue that they are profound threats to freedom and privacy. “Your papers, please” is a demand associated with police states and dictatorships. In Germany and France, identification papers included the person’s religion, making it easy for the Nazis to capture and remove Jews. Under the infamous pass laws of South Africa, people carried passes, or identification papers, that categorized them by race and controlled where they could live and work. Cards with embedded chips or magnetic strips and the large amount of personal information they can carry or access have even more potential for abuse. Most people would not have access to the machinery that reads the cards. Thus, they would not always know what information they are giving others about themselves. Theft and forgery of cards would reduce some of the potential benefits. Peter Neumann and Lauren Weinstein warned of risks that arise from the databases
and communication complexes that would support a national ID card system: “The opportunities for overzealous surveillance and serious privacy abuses are almost limitless, as are opportunities for masquerading, identity theft, and draconian social engineering on a grand scale.”65
A woman in Canada could not get her tax refund because the tax agency insisted she was dead. Her identification number had been mistakenly reported in place of her mother’s when her mother died. She would still have been able to get a new job, withdraw money from her bank account, pay her rent, send email, and go to her doctor while she was resolving the problem with the tax agency. What if the worker verification database connected to the death records database? Or what if a mistake cancelled the one ID card required for all these transactions? A critic of a proposal for a national identification card in Australia described the card as a “license to exist.”66
The REAL ID Act attempts to develop a secure national identification card by setting federal standards for driver's licenses (and state-issued ID cards, for people without driver's licenses). Licenses must meet the federal standards if they are to be used for identification by the federal government. Such purposes include airport security and entering federal facilities. By implication, they likely include working for the federal government and obtaining federal benefits. It is likely that the government will add many new uses, as it did with the Social Security number. Businesses and state and local governments are likely to require the federally approved ID card for many transactions and services. The federal government pays for approximately half the medical care in the United States (for example, Medicare, benefits for veterans, and numerous federally funded programs). It is not hard to envision requiring the driver's license for federal medical services and eventually it becoming a de facto national medical ID card.
The REAL ID Act requires that, to get a federally approved driver’s license or ID card, each person must provide documentation of address, birth date, Social Security number, and legal status in the United States. Motor vehicle departments must verify each person’s information, in part by accessing federal databases such as the Social Security database. The departments must scan documents submitted by drivers and store them in transferable form, for at least 10 years (making motor vehicle records a desirable target for identity thieves). The licenses must satisfy various requirements to reduce tampering and counterfeiting, and they must include the person’s photo and machine-readable information to be determined by the Department of Homeland Security.
The REAL ID Act puts the burden of verifying identity on individuals and the state motor vehicle departments. Errors in federal databases used for verification could prevent people from getting their driver's licenses. Many states object to the mandate and
Accuracy of worker verification database: Section 6.3.1
its high costs (estimated in billions of dollars). More than 20 states passed resolutions refusing to participate. Residents in states without a federally approved driver's license could experience serious inconvenience. Congress passed REAL ID in 2005, and it was originally to
take effect in 2008. The Department of Homeland Security extended the deadline for
compliance several times, while some members of Congress have been working to modify or repeal REAL ID. As I write this, the deadline remains in the future, and Congress has not repealed the law.
Many European and Asian countries require national ID cards. An unpopular plan for an expensive mandatory national ID card in the United Kingdom stalled when emails about weaknesses of the plan leaked from government offices. The government of Japan implemented a national computerized registry system that included assigning an ID number to every citizen of the country. The system is for government purposes, initially with approximately 100 applications, but eventually its uses will probably be in the thousands. The intention is to simplify administration procedures and make them more efficient. Privacy advocates and protesters have complained of insufficient privacy protection, potential abuse by government, and vulnerability to hackers. The Indian government is building a national ID database for its 1.2 billion people. The database will include each person’s photo, fingerprints, iris scan, birth date, and other information. Its stated purposes include improving provision of government services and catching illegal immigrants.
As soon as you are willing to put your home, your office, your safe deposit box, your bike lock, your gym key, and your desk key all onto one and ask the government to issue that one key, you will be okay with the national ID. But until then, we need to think more in terms of diversification of identification systems.
—Jim Harper, Director of Information Policy Studies, Cato Institute67
2.5 Protecting Privacy: Technology, Markets, Rights, and Laws
2.5.1 Technology and Markets
Many individuals, organizations, and businesses help meet the demand for privacy to some degree: Individual programmers post free privacy-protecting software on the Web. Entrepreneurs build new companies to provide technology-based privacy protections. Large businesses respond to consumer demand and improve policies and services. Organizations such as the Privacy Rights Clearinghouse provide excellent information resources. Activist organizations such as the Electronic Privacy Information Center inform the public, file lawsuits, and advocate for better privacy protection.
New applications of technology can often solve problems that arise as side effects of technology. Soon after “techies” became aware of the use of cookies by Web sites, they wrote cookie disablers and posted them on the Web. Software to block pop-up ads appeared soon after the advent of such ads. People figured out how to prevent ads from appearing in their Gmail and told the world. Companies sell software to scan for spyware;
some versions are free. We can install free add-ons to our browsers that block Web activity trackers. Several companies provide services, called anonymizers, with which people can surf the Web anonymously, leaving no record that identifies them or their computers. Some search engines do not store user search queries in a way that allows linking them
More about anonymizers: Section 3.4
together to one person.68 Companies offer products and services to prevent forwarding, copying, or printing email. (Lawyers are among the major customers.) There are services that fully erase email or text
messages (on both the sender’s and recipient’s phones) after a user-specified time period. They can be helpful for doctors, who must follow very strict medical privacy regulations. Some tracking systems for laptops, tablets, and phones include a feature that allows the owner of a stolen or lost laptop to encrypt, retrieve, and/or erase files remotely.
These are a very few examples of the many products and technology applications that protect privacy. They illustrate that individuals, businesses, and organizations are
Protections against identity theft: Section 5.3.2
quick to respond and make privacy-protecting tools available. They have advantages and disadvantages; they do not solve all problems. Learning about, installing, and using privacy tools might be daunting
to nontechnical, less educated users—a large part of the public—hence the importance of designing systems with privacy protection in mind, building in protective features, and having privacy-protecting policies.
Encryption
Cryptography is the art and science of hiding data in plain sight.
—Larry Loen69
It is possible to intercept email and data in transit on the Internet and to pick wireless transmissions out of the air. Someone who steals a computer or hacks into one can view files on it. Most eavesdropping by private citizens is illegal. Hacking and stealing laptops are crimes. The law provides for punishment of offenders when caught and convicted, but we can also use technology to protect ourselves.
Encryption is a technology, often implemented in software, that transforms data into a form that is meaningless to anyone who might intercept or view it. The data could be email, business plans, credit card numbers, images, medical records, cellphone location history, and so on. Software at the recipient’s site (or on one’s own computer) decodes encrypted data so that the recipient or owner can view the messages or files. Software routinely encrypts credit card numbers when we send them to online merchants. People are often not even aware that they are using encryption. The software handles it automatically.
Many privacy and security professionals view encryption as the most important technical method for ensuring the privacy of messages and data sent through computer networks. Encryption also protects stored information from intruders and abuses by
employees. It is the best protection for data on laptops and other small data storage devices carried outside an office.
Encryption generally includes a coding scheme, or cryptographic algorithm, and specific sequences of characters (e.g., digits or letters), called keys, used by the algorithm. Using mathematical tools and powerful computers, it is sometimes possible to “break” an encryption scheme—that is, to decode an encrypted message or file without the secret key.
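As a concrete, purely illustrative example of the algorithm-plus-key idea, the short Python sketch below uses the third-party cryptography package's Fernet recipe, a symmetric scheme in which the same secret key both encrypts and decrypts; the library choice and the sample message are assumptions for the example, not something this chapter discusses.

    # Illustrative sketch; assumes the third-party "cryptography" package is installed.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the secret key; anyone holding it can decrypt
    cipher = Fernet(key)

    message = b"Credit card number: 4111 1111 1111 1111"
    ciphertext = cipher.encrypt(message)     # meaningless to anyone who intercepts it
    print(ciphertext)

    # The recipient (or the file's owner) applies the same key to recover the data.
    print(cipher.decrypt(ciphertext))        # b'Credit card number: 4111 1111 1111 1111'

In practice, protecting the key is as important as the strength of the algorithm, since an attacker who obtains the key does not need to break the encryption scheme at all.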
Modern encryption technology has a flexibility and variety of applications beyond protecting data. For example, it is used to create digital signatures, authentication methods, and digital cash. Digital signature technology allows us to "sign" documents online, saving time and paper for loan applications, business contracts, and so on. In one specialized authentication application, aimed at reducing the risk of unauthorized access to medical information online, the American Medical Association issues digital credentials to doctors that a laboratory website can verify when a doctor visits to get patient test results. There are likely to be thousands of applications of this technology.
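The sketch below, again illustrative and again assuming the same third-party cryptography package, shows the core of a digital signature: the signer uses a private key to produce a signature over a document, and anyone with the matching public key can verify that the document is unchanged and was signed by the key's holder.

    # Illustrative sketch; assumes the third-party "cryptography" package is installed.
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # The signer generates a key pair and keeps the private key secret.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    document = b"I agree to the terms of this loan application."

    # The signature depends on both the document and the private key.
    signature = private_key.sign(document)

    # Verification succeeds only if the document is unaltered and was signed by the
    # holder of the matching private key; otherwise the library raises InvalidSignature.
    public_key.verify(signature, document)
    print("Signature verified.")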
Digital cash and other encryption-based privacy-protected transaction methods can let us do secure financial transactions electronically without the seller acquiring a credit card or checking account number from the buyer. They combine the convenience of credit card purchases with the anonymity of cash. With such schemes, it is not easy to link records of different transactions to form a consumer profile or dossier. These techniques can provide both privacy protection for the consumer with respect to the organizations he or she interacts with and protection for organizations against forgery, bad checks, and credit card fraud. However, cash transactions make it harder for governments to detect and prosecute people who are “laundering” money earned in illegal activities, earning money they are not reporting to tax authorities, or transferring or spending money for criminal purposes. Thus, most governments would oppose and probably prohibit a truly anonymous digital cash system. Some digital cash systems include provisions for law enforcement and tax collection. The potential illegal uses of digital cash have long been possible with real cash. It is only in recent decades, with increased use of checks and credit cards, that we lost the privacy we had from marketers and government when we used cash for most transactions.
The technologies of anonymity and cryptography may be the only way to protect privacy.
—Nadine Strossen, president of the American Civil Liberties Union70
Policies for protecting personal data
The businesses, organizations, and government agencies that collect and store personal data have an ethical responsibility (and in many cases a legal one) to protect it from misuse. Responsible data holders must anticipate risks and prepare for them. They must continually update security policies to cover new technologies and new potential threats.
Encryption Policy
For centuries before the Internet, governments, their military agencies, and their spies were the main users of codes. For decades, most of the cryptographers in the United States worked for the National Security Agency (NSA). The NSA almost
More about the NSA: Section 2.6.3
certainly could break virtually any codes that were in use until the early 1970s.71 The NSA
worked hard to keep everything about encryption secret. In the 1970s, a private-sector breakthrough called public key cryptography produced encryption that was relatively easy to use and very difficult to crack. Keeping encryption as an exclusive tool of governments and spies was no longer an option.
Throughout the 1990s, when people began using encryption for email and other purposes, the U.S. government battled the Internet community and privacy advocates to restrict the availability of secure encryption (that is, encryption that is so difficult and expensive to crack that it is not practical to do so.) It maintained a costly and ultimately futile policy of prohibiting export of powerful encryption software. The government interpreted anything posted on the Internet as effectively exported. Thus, even researchers who posted encryption algorithms on the Net faced possible prosecution. The government argued that the export prohibition was necessary to keep strong encryption from terrorists and enemy governments. The U.S. policy was strangely out of date. The stronger encryption schemes were available on Internet sites all over the world.
The National Research Council (the research affiliate of the National Academy of Sciences) strongly supported the use of powerful encryption and the loosening of export controls. It argued that strong encryption provides increased protection against hackers, thieves, and terrorists who threaten our economic, energy, and transportation infrastructures.72 The need for strong encryption in electronic commerce was becoming obvious as well.
Concurrently with the ban on export of strong encryption, the government attempted to ensure its access to encryption keys (or to the unencrypted content of encrypted messages) for encryption used within the United States. Pedophiles and child molesters encrypt child pornography on their computers. Other criminals encrypt email and files to hide their contents from law enforcement agents. The FBI supported a bill requiring a loophole, or “backdoor,” in all encryption products made, sold, or used in the United States to permit immediate decryption of the encrypted data upon the receipt of a court order.73 The FBI argued that authority to intercept telephone calls or email or seize computers meant nothing if agents could not read what they seized. Technical experts argued that such a law would be extraordinarily difficult to implement because encryption is now part of Web browsers and many other common computing tools. Implementation of an immediate decryption mechanism would threaten privacy and seriously weaken security of electronic commerce and communications.
During the same time, courts considered legal challenges to the export restrictions based on the First Amendment. The question is whether cryptography algorithms, and computer programs in general, are speech and hence protected by the First Amendment. The government argued that software is not speech and that control of cryptography was a national security issue, not a freedom-of-speech issue. The federal judge who heard the case thought otherwise. She said:
This court can find no meaningful difference between computer language . . . and German or French. . . . Like music and mathematical equations, computer language is just that, language, and it communicates information either to a computer or to those who can read it. . . . For the purposes of First Amendment analysis, this court finds that source code is speech.74
The U.S. government removed almost all export restrictions on encryption in 2000. Congress did not pass a law requiring all encryption to have a mechanism for law enforcement access. Among thousands of wiretaps approved for criminal investigations in 2010, law enforcement agents encountered encryption only six times and were able to obtain the plain text of the messages.75
Policies for protecting personal data

The businesses, organizations, and government agencies that collect and store personal data have an ethical responsibility (and in many cases a legal one) to protect it from misuse. Responsible data holders must anticipate risks and prepare for them. They must continually update security policies to cover new technologies and new potential threats. Employers must train those who carry around personal data about the risks and proper security measures.
A well-designed database for sensitive information includes several features to protect against leaks, intruders, and unauthorized employee access. Each person with authorized access to the system should have a unique identifier and a password. A system can restrict users from performing certain operations, such as writing or deleting, on some files. User IDs can be coded so that they give access to only specific parts of a record. For example, a billing clerk in a hospital does not need access to the results of a patient’s lab tests. The computer system keeps track of information about each access, including the ID of the person looking at a record and the particular information viewed or modified. This is an audit trail that can later help trace unauthorized activity. The knowledge that a system contains such provisions will discourage many privacy violations.
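To make these protections concrete, here is a minimal sketch of how a DBMS could implement restricted access and an audit trail, using a view, a privilege grant, and an Oracle-style PL/SQL trigger. The table, view, and role names (patient, patient_billing, billing_clerk, patient_audit) are hypothetical examples chosen for illustration, not taken from any particular system.

    -- A billing clerk sees only billing columns, through a restricted view.
    CREATE VIEW patient_billing AS
        SELECT patient_id, patient_name, billing_address, balance_due
        FROM   patient;
    GRANT SELECT ON patient_billing TO billing_clerk;
    -- No privileges are granted on the base PATIENT table or on lab-result
    -- tables, so the clerk cannot read a patient's lab tests.

    -- Audit trail: record who changed which patient record, and when.
    CREATE TABLE patient_audit (
        audit_user   VARCHAR2(30),
        audit_action VARCHAR2(6),
        patient_id   NUMBER,
        audit_time   TIMESTAMP DEFAULT SYSTIMESTAMP
    );

    CREATE OR REPLACE TRIGGER trg_patient_audit
    AFTER INSERT OR UPDATE OR DELETE ON patient
    FOR EACH ROW
    BEGIN
        INSERT INTO patient_audit (audit_user, audit_action, patient_id)
        VALUES (USER,
                CASE WHEN INSERTING THEN 'INSERT'
                     WHEN UPDATING  THEN 'UPDATE'
                     ELSE                'DELETE' END,
                COALESCE(:NEW.patient_id, :OLD.patient_id));
    END;
    /

A DML trigger of this kind does not record reads, so a production system would also enable the DBMS's own auditing of SELECT statements; the sketch is only meant to illustrate the ideas of limited views, restricted privileges, and an audit trail described above.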
Databases with consumer information, Web-activity records, or cellphone location data are valuable assets that give businesses a competitive advantage. The owners of such data have an interest in preventing leaks and unlimited distribution. That includes providing security for the data and developing modes of operation that reduce loss. Thus, for example, mailing lists are usually not sold; they are “rented.” The renter does not receive a copy (electronic or otherwise). A specialized firm does the mailing. The risk of unauthorized copying is thus restricted to a small number of firms whose reputation for honesty and security is important to their business. Other applications also use this idea of trusted third parties to process confidential data. Some car rental agencies access a third-party service to check the driving record of potential customers. The service examines the motor vehicle department records; the car rental company does not see the driver’s record.
Website operators pay thousands, sometimes millions, of dollars to companies that do privacy audits. Privacy auditors check for leaks of information, review the company’s privacy policy and its compliance with that policy, evaluate warnings and explanations on its website that alert visitors when the site requests sensitive data, and so forth. Hundreds of large businesses have a position called chief privacy officer. This person guides company privacy policy. Just as the Automobile Association of America rates hotels, the Better Business Bureau and similar organizations offer a seal of approval, an icon that companies complying with their privacy standards can post on their websites.
Large companies use their economic influence to improve consumer privacy. IBM and Microsoft removed Internet advertising from websites that do not post clear privacy policies. Walt Disney Company and Infoseek Corporation did the same and, in addition, stopped accepting advertising on their websites from sites that do not post privacy policies. The Direct Marketing Association adopted a policy requiring its member companies to inform consumers when they will share personal information with other marketers and to give people an opt-out option. Many companies agreed to limit the availability of sensitive consumer information, including unlisted telephone numbers, driving histories, and all information about children.
There continue, of course, to be many businesses without strong privacy policies, as well as many that do not follow their own stated policies. The examples described here represent a trend, not a privacy utopia. They suggest actions responsible companies can take. As some problems are addressed, new ones continually arise.
2.5.2 Rights and Law
In Section 2.2, we considered some aspects of law and Fourth Amendment principles related to protection of privacy. The Fourth Amendment protects the negative right (a liberty) against intrusion and interference by government. This section focuses mainly on discussion of principles related to rights and legal protections for personal data collected or used by other people, businesses, and organizations.
We separate legal remedies from technical, management, and market solutions because they are fundamentally different. The latter are voluntary and varied. Different people or businesses can choose from among them. Law, on the other hand, is enforced by fines, imprisonment, and other penalties. Thus, we should examine the basis for law more carefully. Privacy is a condition or state we can be in, like good health or financial security. To what extent should we have a legal right to it? Is it a negative right or a positive right (in the sense of Section 1.4.2)? How far should law go, and what should be left to the voluntary interplay of markets, educational efforts of public interest groups, consumer choices and responsibilities, and so forth?
Until the late 19th century, courts based legal decisions supporting privacy in social and business activities on property rights and contracts. There was no recognition of an independent right to privacy. In 1890, a crucial article called “The Right to Privacy,” by Samuel Warren and Louis Brandeis76 (later a Supreme Court Justice), argued that privacy was distinct from other rights and needed more protection. Judith Jarvis Thomson, an MIT philosopher, argued that the old view was more accurate, that in all cases where infringement of privacy is a violation of someone’s rights, that violation is of a right distinct from privacy.77 We present some of the claims and arguments of these papers. Then we consider a variety of other ideas and perspectives about laws to protect privacy.
One purpose of this section is to show the kinds of analyses that philosophers, legal scholars, and economists perform in trying to elucidate underlying principles. Another is to emphasize the importance of principles, of working out a theoretical framework in which to make decisions about particular issues and cases.
Warren and Brandeis: The inviolate personality
The main target of criticism in the 1890 Warren and Brandeis article is newspapers, especially the gossip columns. Warren and Brandeis vehemently criticize the press for “overstepping . . . obvious bounds of propriety and decency.” The kinds of information of most concern to them are personal appearance, statements, acts, and interpersonal relationships (marital, family, and others).78 Warren and Brandeis take the position that people have the right to prohibit publication of facts about themselves and photographs of themselves. Warren and Brandeis argue that, for example, if someone writes a letter in which he says he had a fierce argument with his wife, the recipient of the letter cannot publish that information. They base this claim on no property right or other right except privacy. It is part of the right to be let alone. Warren and Brandeis base their defense of privacy rights on, in their often-quoted phrase, the principle of “an inviolate personality.”
Laws against other wrongs (such as slander, libel, defamation, copyright infringement, violation of property rights, and breach of contract) can address some privacy violations, but Warren and Brandeis argue that there remain many privacy violations that those other laws do not cover. For example, publication of personal or business information could constitute a violation of a contract (explicit or implied), but there are many cases in which the person who discloses the information has no contract with the victim. The person is not violating a contract but is violating the victim’s privacy. Libel, slander, and defamation laws protect us when someone spreads false and damaging rumors about us, but they do not apply to true personal information whose exposure makes us uncomfortable. Warren and Brandeis say privacy is distinct and needs its own protection. They allow exceptions for publication of information of general interest (news), use in limited situations when the information concerns another person’s interests, and oral publication. (They were writing before radio and television, so oral publication meant a quite limited audience.)
Judith Jarvis Thomson: Is there a right to privacy?
Judith Jarvis Thomson argues the opposite point of view. She gets to her point after examining a few scenarios.
Suppose you own a copy of a magazine. Your property rights include the right to refuse to allow others to read, destroy, or even see your magazine. If someone does anything to your magazine that you did not allow, that person is violating your property rights. For example, if someone uses binoculars to see your magazine from a neighboring building, that person is violating your right to exclude others from seeing it. It does not matter whether the magazine is an ordinary news magazine (not a sensitive privacy issue) or some other magazine you do not want people to know you read. The right violated is your property right.
You may waive your property rights, intentionally or inadvertently. If you absent- mindedly leave the magazine on a park bench, someone could take it. If you leave it on the coffee table when you have guests at your home, someone could see it. If you read a pornographic magazine on a bus, and someone sees you and tells other people that you read dirty magazines, that person is not violating your rights. The person might be doing something impolite, unfriendly, or cruel, but not something that violates a right.
Our rights to our person and our bodies include the right to decide to whom we show various parts of our bodies. By walking around in public, most of us waive our right to prevent others from seeing our faces. When a Muslim woman covers her face, she is exercising her right to keep others from viewing it. If someone uses binoculars to spy on us at home in the shower, they are violating our right to our person.
If someone beats on you to get some information, the beater is violating your right to be free from physical harm done by others. If the information is the time of day, privacy is not at issue. If the information is more personal, then they have compromised your privacy, but the right violated is your right to be free from attack. On the other hand, if a person peacefully asks whom you live with or what your political views are, they have violated no rights. If you choose to answer and do not make a confidentiality agreement, the person is not violating your rights by repeating the information to someone else, though it could be inconsiderate to do so. However, if the person agreed not to repeat the information, but then does, it does not matter whether or not the information was sensitive; the person is violating the confidentiality agreement.
In these examples, there is no violation of privacy without violation of some other right, such as the right to control our property or our person, the right to be free from violent attack, or the right to form contracts (and expect them to be enforced). Thomson concludes, “I suggest it is a useful heuristic device in the case of any purported violation of the right to privacy to ask whether or not the act is a violation of any other right, and if not whether the act really violates a right at all.”79
Criticisms of Warren and Brandeis and of Thomson
Critics of the Warren and Brandeis position80 argue that it does not provide a workable principle or definition from which to conclude that a privacy right violation occurs. Their notion of privacy is too broad. It conflicts with freedom of the press. It appears to make almost any unauthorized mention of a person a violation of the person’s right.
Critics of Thomson present examples of violations of a right to privacy (not just a desire for privacy), but of no other right. Some view Thomson’s notion of the right to our person as vague or too broad. Her examples might (or might not) be a convincing argument for the thesis that considering other rights can resolve privacy questions, but no finite number of examples can prove such a thesis.
Neither article directly refutes the other. Their emphases are different. Warren and Brandeis focus on the use of the information (publication). Thomson focuses on how it is obtained. This distinction sometimes underlies differences in arguments by those who advocate strong legal regulations on use of personal data and those who advocate more reliance on technical, contractual, and market solutions.
Applying the theories
How do the theoretical arguments apply to privacy and personal data today? Throughout Warren and Brandeis, the objectionable action is publication of personal information—its widespread, public distribution. Many court decisions since the appearance of their article have taken this point of view.81 If someone published information from a consumer database (in print or by making it public on the Web), that would violate the Warren and Brandeis notion of privacy. A person might win a case if someone published his or her consumer profile. But intentional publication is not the main concern in the current context of consumer databases, monitoring of Web activity, location tracking, and so on. The amount of personal information collected nowadays might appall Warren and Brandeis, but their article allows disclosure of personal information to people who have an interest in it. By implication, they do not preclude, for example, disclosure of a person’s driving record to a car rental company from which he or she wants to rent a car. Similarly, it seems Warren and Brandeis would not oppose disclosure of information about whether someone smokes cigarettes to a life insurance company from whom the person is trying to buy insurance. Their view does not rule out use of (unpublished) consumer information for targeted marketing, though they probably would disapprove of it.
The content of social networks would probably shock and appall Warren and Brandeis. Their position would severely restrict the sharing of photos that include other people and of the location and activities of friends.
An important aspect of both the Warren and Brandeis paper and the Thomson paper is that of consent. They see no privacy violation if a person consented to the collection and use of the information.
Transactions
We have another puzzle to consider: how to apply philosophical and legal notions of privacy to transactions, which automatically involve more than one person. The following scenario will illustrate the problem.
One day in the small farm community of Friendlyville, Joe buys five pounds of potatoes from Maria, who sells him the five pounds of potatoes. (I describe the transaction in this repetitious manner to emphasize that there are two people involved and two sides to the transaction.)
Either Joe or Maria might prefer the transaction to remain secret. The failure of his own potato crop might embarrass Joe. Or Joe might be unpopular in Friendlyville, and Maria fears the townspeople will be angry at her for selling to him. Either way, we are not likely to consider it a violation of the other’s rights if Maria or Joe talks about the purchase or sale of the potatoes to other people in town. But suppose Joe asks for confidentiality as part of the transaction. Maria has three options. (1) She can agree. (2) She can say no; she might want to tell people she sold potatoes to Joe. (3) She can agree to keep the sale confidential if Joe pays a higher price. In the latter two cases, Joe can decide whether to buy the potatoes. On the other hand, if Maria asks for confidentiality as part of the transaction, Joe has three options. (1) He can agree. (2) He can say no; he might want to tell people he bought potatoes from Maria. (3) He can agree to keep the purchase confidential if Maria charges a lower price. In the latter two cases, Maria can decide whether to sell the potatoes.
Privacy includes control of information about oneself. Is the transaction a fact about Maria or a fact about Joe? There does not appear to be a convincing reason for either party to have more right than the other to control information about the transaction. Yet this problem is critical to legal policy decisions about use of consumer information. If we are to assign control of the information about a transaction to one of the parties, we need a firm philosophical foundation for choosing which party gets it. (If the parties make a confidentiality agreement, then they have an ethical obligation to respect it. If the agreement is a legal contract, then they have a legal obligation to respect it.)
Philosophers and economists often use simple two-person transactions or relationships, like the Maria/Joe scenario, to try to clarify the principles involved in an issue. Do the observations and conclusions about Maria and Joe generalize to large, complex societies and a global economy, where, often, one party to a transaction is a business? All transactions are really between people, even if indirectly. So if a property right or a privacy right in the information about a transaction goes to one of the parties, we need an argument showing how the transaction in a modern economy is different from the one in Friendlyville. Later in this section, we describe two viewpoints on the regulation of information about consumer transactions: the free market view and the consumer protection view. The consumer protection view suggests treating the parties differently.
Ownership of personal data
Some economists, legal scholars, and privacy advocates propose giving people property rights in information about themselves. The concept of property rights can be useful even when applied to intangible property (intellectual property, for example), but there are problems in using this concept for personal information. First, as we have just seen, activities and transactions often involve at least two people, each of whom would have reasonable but conflicting claims to own the information about the transaction. Some personal information does not appear to be about a transaction, but there still can be problems in assigning ownership. Do you own your birthday? Or does your mother own it? After all, she was a more active participant in the event.
The second problem with assigning ownership of personal information arises from the notion of owning facts. (Copyright protects intellectual property such as computer programs and music, but we cannot copyright facts.) Ownership of facts would severely impair the flow of information in society. We store information on electronic devices, but we also store it in our minds. Can we own facts about ourselves without violating the freedom of thought and freedom of speech of others?
Although there are difficulties with assigning ownership in individual facts, another issue is whether we can own our “profiles,” that is, a collection of data describing our activities, purchases, interests, and so on. We cannot own the fact that our eyes are blue, but we do have the legal right to control some uses of our photographic image. In almost all states, we need a person’s consent to use his or her image for commercial purposes. Should the law treat our consumer profiles the same way? Should the law treat the collection of our search queries the same way? How can we distinguish between a few facts about a person and a “profile”?
Judge Richard Posner, a legal scholar who has extensively studied the interactions between law and economics, gives economic arguments about how to allocate property rights to information.82 Information has both economic and personal value, he points out. It is valuable to us to determine whether a business, customer, client, employer, or employee is reliable and honest. Personal and business interactions have many opportunities for misrepresentation and therefore exploitation of others. Posner’s analysis leads to the conclusion that, in some cases, individuals or organizations should have a property right to information, while in other cases, they should not. That is, some information should be in the public domain. A property right in information is appropriate where the information has value to society and is expensive to discover, create, or collect. Without property rights to such information, the people or businesses that make investments in discovering or collecting the information will not profit from it. The result is that people will produce less of this kind of information, to the detriment of society. Thus, the law should protect, for example, trade secrets, the result of much expenditure and effort by a business. A second example is personal information, such as the appearance of one’s naked body. It is not expensive for a person to obtain, but virtually all of us place value on protecting it, and concealment is not costly to society. So it makes sense to assign the property right in this information to the individual. Some privacy advocates want to protect information that can lead to denial of a job or some kind of service or contract (e.g., a loan). They advocate restrictions on sharing of information that might facilitate negative decisions about people—for example, landlords sharing a database with information about tenant payment histories. Posner argues that a person should not have a property right to negative personal information or other information whose concealment aids people in misrepresentation, fraud, or manipulation. Such information should be in the public domain. That means a person should not have the right to prohibit others from collecting it, using it, and passing it on, as long as they are not violating a contract or confidentiality agreement and do not obtain the information by eavesdropping on private communications or by other prohibited means.
In recent decades, the trend in legislation has not followed Posner’s position. Some critics of Posner’s point of view believe that moral theory, not economic principles, should be the source of property rights.
A basic legal framework
A good basic legal framework that defines and enforces legal rights and responsibilities is essential to a complex, robust society and economy. One of its tasks is enforcement of agreements and contracts. Contracts—including freedom to form them and enforcement of their terms by the legal system—are a mechanism for implementing flexible and diverse economic transactions that take place over time and between people who do not know each other well or at all.
We can apply the idea of contract enforcement to the published privacy policies of businesses and organizations. The Toysmart case is an example. Toysmart, a Web-based seller of educational toys, collected extensive information on about 250,000 visitors to its website, including family profiles, shopping preferences, and names and ages of children. Toysmart had promised not to release this personal information. When the company filed for bankruptcy, it had a large amount of debt and virtually no assets—except its customer database, which had a high value. Toysmart’s creditors wanted the database sold to raise funds to repay them. Toysmart offered the database for sale, causing a storm of protest. Consistent with the interpretation that Toysmart’s policy was a contract with the people in the database, the bankruptcy-court settlement included destruction of the database.83
A second task of a legal system is to set defaults for situations that contracts do not explicitly cover. Suppose a website posts no policy about what it does with the information it collects. What legal rights should the operator of the site have regarding the information? Many sites and offline businesses act as though the default is that they can do anything they choose. A privacy-protecting default would be that they can use the information only for the direct and obvious purpose for which they collected it. The legal system can (and does) set special confidentiality defaults for sensitive information, such as medical and financial information, that tradition and most people consider private. If a business or organization wants to use information for purposes beyond the default, it would have to specify those uses in its policies, agreements, or contracts or request consent. Many business interactions do not have written contracts, so the default provisions established by law can have a big impact.
A third task of a basic legal structure is to specify penalties for criminal offenses and breach of contracts. Thus, law can specify penalties for violation of privacy policies and negligent loss or disclosure of personal data that businesses and others hold. Writers of liability laws must strike a balance between being too strict and too lenient. If too strict, they make some valuable products and services too expensive to provide. If too weak, they provide insufficient incentive for businesses and government agencies to provide reasonable security for our personal data. (More about liability issues: Section 8.3.3.)
Regulation
Technical tools, market mechanisms, and business policies for privacy protection are not perfect. Is that a strong argument for regulatory laws? Regulation is not perfect either. We must evaluate regulatory solutions by considering effectiveness, costs and benefits, and side effects, just as we evaluate other kinds of potential solutions to problems caused by technology. The pros and cons of regulation fill entire books. We briefly make a few points here. (We will see similar problems in Section 8.3.3 when we consider responses to computer errors and failures.)
There are hundreds of privacy laws. When Congress passes laws for complex areas like privacy, the laws usually state general goals and leave the details to government agencies that write hundreds or thousands of pages of regulations, sometimes over many years. It is extremely difficult to write reasonable regulations for complex situations. Laws and regulations often have unintended effects or interpretations. They can apply where they do not make sense or where people simply do not want them.
Regulations often have high costs, both direct dollar costs to businesses (and, ultimately, consumers) and hidden or unexpected costs, such as loss of services or increased inconvenience. For example, regulations that prohibit broad consent agreements and instead require explicit consent for each secondary use of personal information have an attribute economists call “high transaction cost.” The consent requirement could be so expensive and difficult to implement that it eliminates most secondary uses of information, including those that consumers find desirable.
Although regulations have disadvantages, we should remember that businesses sometimes overestimate the cost of privacy regulations. They also sometimes underestimate the costs, to themselves and to consumers, of not protecting privacy.84
Contrasting Viewpoints
When asked “If someone sues you and loses, should they have to pay your legal expenses?” more than 80% of people surveyed said “yes.” When asked the same question from the opposite perspective: “If you sue someone and lose, should you have to pay their legal expenses?” about 40% said “yes.”
The political, philosophical, and economic views of many scholars and advocates who write about privacy differ. As a result, their interpretations of various privacy problems and their approaches to solutions often differ, particularly when they are considering laws and regulation to control collection and use of personal information by businesses.* We contrast two perspectives. I call them the free market view and the consumer protection view.
The free market view
People who prefer market-oriented solutions for privacy problems tend to emphasize the freedom of individuals, as consumers or in businesses, to make voluntary agreements; the diversity of individual tastes and values; the flexibility of technological and market solutions; the response of markets to consumer preferences; the usefulness and importance of contracts; and the flaws of detailed or restrictive legislation and regulatory solutions. They emphasize the many voluntary organizations that provide consumer education, develop guidelines, monitor the activities of business and government, and pressure businesses to improve policies. They may take strong ethical positions but emphasize the distinction between the role of ethics and the role of law.

* There tends to be more agreement among privacy advocates when considering privacy threats and intrusions by government.
A free market view for collection and use of personal information emphasizes informed consent: Organizations collecting personal data (including government agencies and businesses) should clearly inform the person providing the information if they will not keep it confidential (from other businesses, individuals, and government agencies) and how they will use it. They should be legally liable for violations of their stated policies. This viewpoint could consider truly secret forms of invisible information gathering to be theft or intrusion.
A free market view emphasizes freedom of contract: People should be free to enter agreements (or not enter agreements) to disclose personal information in exchange for a fee, services, or other benefits according to their own judgment. Businesses should be free to offer such agreements. This viewpoint respects the right and ability of consumers to make choices for themselves based on their own values. Market supporters expect consumers to take the responsibility that goes with freedom—for example, to read contracts or to understand that desirable services have costs. A free market view includes free flow of information: the law should not prevent people (or businesses and organizations) from using and disclosing facts they independently or unintrusively discover without violating rights (e.g., without theft, trespass, or violation of contractual obligations).
We cannot always expect to get exactly the mix of attributes we want in any product, service, or job. Just as we might not get cheeseless pizza in every pizza restaurant or find a car with the exact set of features we want, we might not always be able to get both privacy and special discounts—or free services. We might not be able to get certain websites—or magazines—without advertising, or a specific job without agreeing to provide certain personal information to the employer. These compromises are not unusual or unreasonable when interacting with other people.
Market supporters prefer to avoid restrictive legislation and detailed regulation for several reasons. Overly broad, poorly designed, and vague regulations stifle innovation. The political system is a worse system than the market for determining what consumers want in the real world of trade-offs and costs. It is impossible for legislators to know in advance how much money, convenience, or other benefits people will want to trade for more or less privacy. Businesses respond over time to the preferences of millions of consumers expressed through their purchases. In response to the desire for privacy many people express, the market provides a variety of privacy protection tools. Market supporters argue that laws requiring specific policies or prohibiting certain kinds of contracts violate the freedom of choice of both consumers and business owners.
This viewpoint includes legal sanctions for those who steal data and those who violate confidentiality agreements. It holds businesses, organizations, and government agents responsible for loss of personal data due to poor or negligent security practices. To encourage innovation and improvement, advocates of this viewpoint are more likely to prefer penalties when a company loses, inappropriately discloses, or abuses the data, rather than regulations that specify detailed procedures that holders of personal information must follow.
The free market viewpoint sees privacy as a “good,” both in the sense that it is desirable and that it is something we can obtain varying amounts of by buying or trading in the economy, like food, entertainment, and safety. Just as some people choose to trade some safety for excitement (bungee jumping, motorcycle riding), money (buying a cheaper but less safe product), or convenience, some choose different levels of privacy. As with safety, law can provide minimum standards, but it should allow the market to provide a wide range of options to meet the range of personal preferences.
The consumer protection view
Advocates of strong privacy regulation emphasize the unsettling uses of personal information we have mentioned throughout this chapter, the costly and disruptive results of errors in databases (which we discuss in Chapter 8), and the ease with which personal information leaks out, via loss, theft, and carelessness. They argue for more stringent consent requirements, legal restrictions on consumer profiling, prohibitions on certain types of contracts or agreements to disclose data, and prohibitions on businesses collecting or storing certain kinds of data. They urge, for example, that the law require companies to have opt-in policies for secondary uses of personal information, because the opt-out option might not be obvious or easy enough for consumers who would prefer it. They would prohibit waivers and broad consent agreements for secondary uses.
The focus of this viewpoint is to protect consumers against abuses and carelessness by businesses and against their own lack of knowledge, judgment, or interest. Advocates of the consumer protection view emphasize that people do not realize all the ways others may use information about them. They do not understand the risks of agreeing to disclose personal data. Those who emphasize consumer protection are critical of programs to trade free devices and services for personal information or consent for monitoring or tracking. Many support laws prohibiting collection or storage of personal data that could have negative consequences, if they believe the risks are more important than the value of the information to the businesses that want to collect it. Consumer advocate and privacy “absolutist” Mary Gardiner Jones objected to the idea of consumers consenting to dissemination of personal data. She said, “You can’t expect an ordinary consumer who is very busy trying to earn a living to sit down and understand what [consent] means. They don’t understand the implications of what use of their data can mean to them.”85
She said this roughly 20 years ago. Understanding the implications of the ways data are collected and used now is more difficult. A former director of the ACLU’s Privacy and Technology Project expressed the view that informed consent is not sufficient protection. She urged a Senate committee studying confidentiality of health records to “re-examine the traditional reliance on individual consent as the linchpin of privacy laws.”86
Those who emphasize the consumer protection point of view would argue that the Joe/Maria scenario in Friendlyville, described earlier in this section, is not relevant in a complex society. The imbalance of power between the individual and a large corporation is one reason. Another is that in Friendlyville the information about the transaction circulates to only a small group of people, whom Joe and Maria know. If someone draws inaccurate or unfair conclusions, Joe or Maria can talk to the person and present his or her explanations. In a larger society, information circulates among many strangers, and we often do not know who has it and what decisions about us they base on it.
A consumer cannot realistically negotiate contract terms with a business. At any specific time, the consumer can only accept or reject what the business offers. And the consumer is often not in a position to reject it. If we want a loan for a house or car, we have to accept whatever terms lenders currently offer. If we need a job, we are likely to agree to disclose personal information against our true preference because of the economic necessity of working. Individuals have no meaningful power against large companies like Google and Apple. They have to use search engines whether or not they know or accept a company’s policy about use of their search queries.
In the consumer protection view, self-regulation by business does not work. Business privacy policies are weak, vague, or difficult to understand. Businesses sometimes do not follow their stated policies. Consumer pressure is sometimes effective, but some companies ignore it. Instead, we must require all businesses to adopt pro-privacy policies. Software and other technological privacy-protecting tools for consumers cost money, and many people cannot afford them. They are far from perfect anyway and hence not good enough to protect privacy.
The consumer protection viewpoint sees privacy as a right rather than something we bargain about. For example, a website jointly sponsored by the Electronic Privacy Information Center and Privacy International flashes the slogans “Privacy is a right, not a preference” and “Notice is not enough.”87 The latter indicates that they see privacy as a positive right, or claim right (in the terminology of Section 1.4.2). As a negative right, privacy allows us to use anonymizing technologies and to refrain from interacting with those who request information we do not wish to supply. As a positive right, it means we can stop others from communicating about us. A spokesperson for the Center for Democracy and Technology expressed that view in a statement to Congress, saying that we must incorporate into law the principle that people should be able to “determine for themselves when, how and to what extent information about them is shared.”88
2.5.3 Privacy Regulations in the European Union
The European Union (EU) has a comprehensive Data Protection Directive (passed in 1995).89 It covers processing of personal data, including collection, use, storage, retrieval, transmission, destruction, and other actions. The directive sets forth Fair Information Principles that EU member nations must implement in their own laws. Several are similar to the first five principles in Figure 2.2 (in Section 2.1.3). The EU has some additional or stronger rules. They include:
• Processing of data is permitted only if the person has consented unambiguously or if the processing is necessary to fulfill contractual or legal obligations or is needed for tasks in the public interest or by official authorities to accomplish their tasks (or a few other reasons).
• Special categories of data—including ethnic and racial origin, political and religious beliefs, health and sex life, and union membership—must not be processed without the person’s explicit consent. Member nations may outlaw processing of such data even if the subject does consent.
• Processing of data about criminal convictions is severely restricted.
The EU’s rules are stricter than those in the United States, as the next few examples illustrate.
Google modified its privacy policy in 2012 to allow the company to combine information it collects on members from its various services. The EU argued that average users could not understand how Google uses their data under the new policy and that this violates the EU’s privacy regulations. A court in Germany said that some of Facebook’s policies in its member agreement (for example, granting Facebook a license to use material a member posts or stores at Facebook) are illegal there. The German government told Facebook to stop running face recognition applications on German users; it violates German privacy laws.
The EU devised legal guidelines for social networking sites. The guidelines say the sites should set default privacy settings at a high level, tell users to upload a picture of a person only if the person consents, allow the use of pseudonyms, set limits on the time they retain data on inactive users, and delete accounts that are inactive for a long time.
The European Commission proposed granting a legal “right to be forgotten.” (More about a right to be forgotten: Section 2.3.4.) It would, among other things, require that a website remove information, photos, and so on, of a particular person if that person requests it, whether that person posted the material or someone else did. It appears also to require that search engines remove links to material a person wants removed. Such a “right” clearly conflicts with freedom of speech in cases where another person posted the material and does not want it removed. A Spanish government agency ordered Google to remove links from its search results to dozens of articles that have sensitive information about individual people. (Google fought the demand in European court, arguing that the order violated freedom of expression and that the government did not require news media to remove the articles.) Because of Germany’s strict privacy laws, Google’s Street View allowed anyone to request that their home or office be blurred out on its street images. Google won a lawsuit about Street View violating a homeowner’s privacy, but the company discontinued taking photos for Street View in Germany.90
While the EU has much stricter regulations than the United States on collection and use of personal information by the private sector, some civil libertarians believe that the regulations do not provide enough protection from use of personal data by government agencies. Although the directive says that data should not be kept longer than necessary, European countries require that ISPs and telephone companies retain records of customer communications (date, destination, duration, and so on) for up to two years and make them available to law enforcement agencies. The EU said it needs this requirement to fight terrorism and organized crime.91
The EU’s strict privacy directive does not prevent some of the same abuses of personal data that occur in the United States. In Britain, for example, the Information Commissioner reported that data brokers use fraud and corrupt insiders to get personal information. As in the United States, customers of illegal services include journalists, private investigators, debt collectors, government agencies, stalkers, and criminals seeking data to use for fraud.92
The EU Data Protection Directive prohibits transfer of personal data to countries outside the European Union that do not have an adequate system of privacy protection. This part of the directive caused significant problems for companies that do business both in and outside Europe and might normally process customer and employee data outside the EU. The EU determined that Australia, for example, did not have adequate privacy protection. Australia allows businesses to create their own privacy codes consistent with the government’s National Privacy Principles. The United States has privacy laws covering specific areas such as medical information, video rentals, driver’s license records, and so on, but does not have comprehensive privacy laws covering all personal data. The EU agreed to the “Safe Harbor” plan, under which companies outside the EU that agree to abide by a set of privacy requirements similar to the principles in the Data Protection Directive may receive personal data from the EU.93 After the terrorist attacks in 2001, screening of air travel passengers from Europe to the United States raised problems. The U.S. government wanted more information about the passengers than the EU wanted to provide.
Many privacy advocates describe U.S. privacy policy as “behind Europe” because the United States does not have comprehensive federal legislation regulating personal data collection and use. Others point out that the United States and Europe have different cultures and traditions. European countries tend to put more emphasis on regulation and centralization, especially concerning commerce, whereas U.S. tradition puts more emphasis on contracts, consumer pressure, flexibility and freedom of the market, and penalties for abuses of personal information by enforcement of existing laws (such as those against deceptive and unfair business practices).
2.6 Communications
