‘Modern’ data protection laws first appeared in 1970 (in Germany) as a response to the use of computers to process information about people. At that time, however, there were relatively few computers, and most were in the public or academic sectors or at a few large corporations. Moreover, these machines tended to be housed in secure locations without direct connections to the outside world. For regulators, the task of tracking and supervising the processing of personal data might have been a realistic objective initially, although not for long.
What’s wrong with privacy regulation today?
As for processing power, the Apollo Guidance Computer that was used to put men on the moon in 1969 was capable of processing up to 9,600 instructions per second. Fast-forward to 2007: a Blackberry smartphone can process over 1 billion instructions per second, and the IBM Blue Gene/L supercomputer 360 trillion. Undoubtedly more important than this explosion in raw processing capability is today’s massive, and rapidly expanding, connectivity, with a worldwide internet population of 1.25 billion and mobile phone subscriptions predicted to reach 3.25 billion by the end of 2007.
Yet if you look at most privacy legislation, in certain key respects we are stuck with offline mainframe concepts from almost four decades ago and a regulatory environment that is well past its use-by date. Take, for example, the legislation that was enacted by the 27 member states of the European Union to implement the 1995 Data Protection Directive. By 1995, internet e-mail was already widely used and, thanks largely to the development of Netscape’s browser software, use of the web was also becoming popular. Yet, although it had been subjected to five years of intense scrutiny, and at times heated debate, the 1995 directive contained provisions that were fundamentally incompatible with the architecture of the internet. In particular, the imposition of cumbersome controls on the export of personal data clearly conflicted with ubiquitous international data communications, especially in the context of unstructured systems such as internet e-mail. It may perhaps have made some sense in 1970 to expect a representative of a government department, university or major corporation to apply for an export licence before boarding a plane with a briefcase containing a magnetic tape with personal data recorded on it. However, as some of us pointed out at the time, it made no sense in 1995 to adopt a directive that would lead to analogous controls being imposed on millions of organisations in respect of e-mails being sent via the internet to any country outside the EU that did not provide an ‘adequate’ level of protection for personal data. The cumbersome transborder data flow rules have since been rendered even more absurd now that they can apply to the personal data stored in numerous Blackberries, mobile phones, PDAs and laptops that are carried by business travellers on flights out of the EU every day.
To make matters worse, when the European Commission attempted to simplify the compliance process by approving standard contract clauses for organisations to use as a framework for transfers of personal data, some two-thirds of the EU countries promptly destroyed the usability of those clauses by imposing bureaucratic obstacles by way of filing or approval requirements. A further attempt to introduce a form of self-regulation via ‘binding corporate rules’ has also been hampered significantly by the adoption by a number of national regulators of an overly bureaucratic approval process. In addition to these restrictions on international data transfers, most EU countries still require businesses to submit filings (also called notifications or registrations) in relation to their in-country data processing activities. Since it is difficult to see how this bureaucratic obligation either protects the public or assists regulators in their work, in most cases it amounts to nothing more than a data tax by another name. In a number of countries, however, regulators focus significant resources on enforcing such filing requirements, as this is the primary means by which their offices are funded. Harsh though it may seem, in most of the EU complete regulatory paralysis is avoided only through a combination of widespread non-compliance and extremely limited enforcement activity. This is certainly the case in relation to transborder data flows, for which the vast majority of organisations do not yet have appropriate arrangements in place. Even when businesses have attempted to install a comprehensive contractual structure to cover data exports, only a very small minority have then gone on to make the requisite filings with all the relevant regulators. This is not particularly surprising given how unnecessarily cumbersome and bureaucratic the filing or approval process for international transfers has become in many EU countries.
Are privacy regulators prepared to change?
Despite widespread criticism of the EU directive from business organisations and independent commentators, in March 2007 the European Commission informed the European Parliament and the Council of the European Union that it considered that the directive ‘fulfils its original objectives’ and announced that it had no plans to amend it. This was a considerable disappointment to many, including a reform-minded group of EU privacy regulators that had made little secret of its desire for an overhaul of the directive at the first available opportunity. One of the most outspoken, the UK Information Commissioner Richard Thomas, gave a provocative speech in Washington, DC (backed up with a press release) just two days after the European Commission’s statement that it had no intention to amend the directive. Calling for a ‘greater global consensus on privacy’, Thomas suggested that ‘European laws may need some revision to achieve a closer consensus’ and stressed that ‘the European Union [must] be ready to consider changes’. He was subsequently reported as stating, at a meeting of the Data Protection Forum in London in September 2007, that the directive is ‘highly confusing and overly prescriptive’, that the European Commission’s review was ‘deplorably complacent’ and that it is time to start a debate on changing the directive.
Fortunately, Mr Thomas is not a lone voice in the regulatory community, either within the EU or beyond. That he has the backing of a wider group has been evident since the publication in November 2006 of a statement entitled ‘Communicating Data Protection and Making it More Effective’. Also known as the London Initiative, this document was developed from a speech given six months earlier by Alex Türk, president of the French data protection authority (CNIL). The statement, which was a joint initiative of the CNIL, the European data protection supervisor and the UK information commissioner, with the support of the Canadian, German, Spanish, Italian, Dutch, New Zealand and Swiss regulators, called for a realistic review of the effectiveness of the work of each national privacy regulator. Among other things, they acknowledged:
We must all prioritise, especially by reference to the seriousness and likelihood of harm. We must primarily concentrate on the main risks which individuals are now facing and be careful not to be excessively rigid or purist on issues which do not deserve it. We must be ready for more pragmatism and more flexibility.
One practical area where privacy regulators as a group internationally are actively supporting innovation relates to technical standards for privacy. At the 2005 International Conference of Data Protection and Privacy Commissioners, they issued the Montreux Declaration calling on NGOs, such as business and consumer associations, to develop ‘standards based on or consistent with the fundamental principles of data protection’. The declaration also appealed to ‘hardware and software manufacturers to develop products and systems integrating privacy-enhancing technologies’. Two years later, in September 2007, the commissioners adopted various resolutions in support of the ISO’s privacy-related standards work. In addition, in the key area of identity management, the Ontario information and privacy commissioner Ann Cavoukian has been promoting a specific initiative known as the ‘Seven Privacy-Embedded Laws of Identity’.
Is the global business community ready for privacy 2.0?
Some of the recent calls for a new approach to privacy regulation have come from unexpected quarters. In the United States there is a long tradition of regulating privacy in a minimalist, piecemeal fashion and this has given rise to a patchwork quilt of state and federal laws covering a range of different issues. In 2006, partly in response to the challenges of dealing with incompatible state laws requiring businesses to publicise security breaches that might put individuals at risk of identity theft (commonly called ‘breach notification laws’), the Consumer Privacy Legislative Forum issued a statement calling for comprehensive harmonised federal consumer privacy legislation. The forum was started by eBay, Hewlett-Packard and Microsoft but by the time of the statement it had expanded to a dozen major corporations. The statement was significant for two reasons. First, it was a call for an omnibus federal privacy law. Second, it was initiated by businesses, not privacy activists, legislators or regulators.
More recently, Google has sparked an international debate about privacy standards via a speech made at Unesco in September 2007 by Peter Fleischer, Google’s global privacy counsel. This was followed up a few days later with an article by Eric Schmidt, Google’s CEO, which was published in the Financial Times and elsewhere. Fleischer asked how we might best ‘update privacy concepts for the Information Age’ and called for the creation of ‘minimum standards of privacy protection that meet the expectations and demands of consumers, businesses and governments’.
Rejecting the US approach to data privacy as too fragmented and the EU model as too bureaucratic, he suggested that ‘the most promising foundation’ would be the Privacy Framework adopted by the members of the Asia-Pacific Economic Cooperation (APEC) forum. The Privacy Framework contains information privacy principles that overlap to a large extent with those found in the corresponding OECD guidelines, the Council of Europe convention and the EU directive. The APEC principles cover harm prevention, notice, collection limitation, restrictions on use, choice, integrity, security safeguards, access and correction, and, finally, accountability. This last principle includes the concept of consent or due diligence in relation to international transfers. Notably absent from the framework is the requirement to appoint an independent regulator. Moreover, the APEC principles are in general perceived as a somewhat weaker framework than that described in the OECD guidelines. Perhaps unsurprisingly, privacy activist groups such as the Electronic Privacy Information Center have been quick to criticise Google’s initiative as an attempt to promote support for a relatively weak privacy standard, which would give consumers less protection than they have in many countries today. It is undoubtedly the case that a global standard based on the APEC principles would, in certain respects, be less onerous for businesses than, for example, most national laws in the EU. However, as Fleischer rightly observes, the EU model is ‘too bureaucratic and inflexible’. Indeed, that is to put the case diplomatically. For the reasons given earlier, the EU’s model for privacy regulation, at least as currently implemented, has manifestly failed.
What might privacy 2.0 look like?
At the heart of any new model will remain a set of privacy principles designed to protect individuals and determine what businesses and other organisations may do with the personal data they collect and subsequently use for whatever purpose. Although there is clearly a debate to be had at the margins, there is widespread acceptance of the core principles of good information governance in relation to personal data which can be found in broadly similar form in the OECD guidelines and the Council of Europe convention. Although criticised by some as a weaker model, the more recent APEC principles add an important dimension with their focus on harm prevention. This principle has been recognised by a number of influential privacy regulators as a key driver in guiding their pragmatic enforcement strategies.
‘Privacy 1.0’ went badly off the rails by getting bogged down in bureaucratic processes. As a result, the focus in many jurisdictions shifted away from the individuals that privacy laws were supposed to protect and onto regulators and the organisations they are regulating. In Privacy 2.0, the focus should be firmly on individuals and the core principles relating to transparency (including meaningful information provision), data quality, choice (where appropriate), data security and remedies (such as correction of errors and compensation for actual harm). The right of individuals to find out whether information is held about them and to have access to that information is a core principle in the OECD guidelines, the Council of Europe convention, the EU directive and the APEC privacy framework.
This can be problematic in practice, however. The volume and complexity of information about individuals held by many organisations has exploded. For example, an employer may have an enormous amount of data relating to a long-term current or former employee, including perhaps hundreds of thousands of e-mails sent or received by, or containing references to that individual.
Software tools are now available that make it possible to extract from systems all the e-mails referring to a particular individual. However, the EU directive’s apparently expansive concept of ‘personal data’ is currently interpreted very differently in different member states.
For example, in the UK, following the Court of Appeal judgment in Durant v FSA, it may be that most of the e-mails in the scenario just described will not need to be disclosed to the employee or ex-employee because they will not be deemed to contain ‘personal data’.
In some other member states, it may be that all of the e-mails will be deemed to contain personal data simply because they contain the individual’s name. Even in the UK, however, responding properly to this kind of ‘subject access request’ can be extremely burdensome for the organisation concerned. This is because it may need to sift through the e-mails and other data sources concerned to identify precisely which of them should be disclosed in full or in part. While some may be disclosable in full, others may not contain personal data, some may be privileged and some may need to be redacted to remove personal data relating to other individuals.
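The kind of first-pass sifting described above can be sketched in code. The example below is purely illustrative, with hypothetical names and a deliberately naive matching rule (simple substring search); real e-discovery tools handle aliases, initials, metadata and far more besides, and the legal categories would ultimately require human review:

```python
# Illustrative sketch of a naive first-pass triage for a subject access
# request. All names and data are hypothetical; matching by bare substring
# is a simplification of what real e-discovery tools do.

def triage_emails(emails, data_subject, other_names):
    """Sort e-mails into rough buckets for manual review.

    emails: list of dicts with 'id' and 'body' keys.
    data_subject: name of the individual making the request.
    other_names: third parties whose data may need redacting.
    """
    buckets = {"disclose": [], "redact_and_review": [], "not_personal_data": []}
    for email in emails:
        body = email["body"].lower()
        if data_subject.lower() not in body:
            # No reference to the requester at all: arguably not their
            # personal data (cf. the narrower UK reading after Durant v FSA).
            buckets["not_personal_data"].append(email["id"])
        elif any(name.lower() in body for name in other_names):
            # Mentions the requester AND a third party: candidate for
            # redaction before disclosure.
            buckets["redact_and_review"].append(email["id"])
        else:
            buckets["disclose"].append(email["id"])
    return buckets

emails = [
    {"id": 1, "body": "Alice Smith's appraisal is attached."},
    {"id": 2, "body": "Alice Smith and Bob Jones disagreed in the meeting."},
    {"id": 3, "body": "Reminder: the server room is being repainted."},
]
result = triage_emails(emails, "Alice Smith", ["Bob Jones"])
```

Even this toy version makes the compliance burden visible: every bucket still needs a human to check for privilege, context and the divergent national readings of ‘personal data’.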
For organisations that receive subject access requests with a cross-border element, the current lack of harmonisation can be a nightmare. This may prove to be one of the more difficult areas for development of a next-generation privacy principle, but the status quo is unsustainable.
What is absolutely clear from the European experience is that bureaucratic red tape should be eliminated ruthlessly. For example, legislators or regulators wanting to maintain filing or registration systems should be made to demonstrate that such burdens on organisations really do benefit individuals. A possible compromise in that regard, maybe for a transitional period, might be to dispense with filing requirements where organisations can show that they have appropriate internal information governance structures in place. Interestingly, Sweden has recently followed the example of Germany in providing broad exemptions from such requirements for organisations that appoint a privacy officer.
In a Privacy 2.0 environment, I would expect to see regulators getting much more involved in preventing harm both via education and by taking targeted and coordinated enforcement action with the objective of discouraging future harm. Examples of areas where education initiatives might currently be appropriate would include practical guidance on how to manage profiles on social networking sites and how to minimise risks of identity theft and, if necessary, facilitate identity repair where identities have been compromised. As for targeted enforcement activity, this is an area where the UK information commissioner has already started to put into practice the risk-based approach to enforcement agreed as part of the London Initiative, for example by extracting undertakings from financial institutions in relation to alleged security breaches.
A difficult challenge for Privacy 2.0 will be ensuring that legislation and regulatory practice are, to the fullest extent possible, both technologically neutral and able to accommodate the very specific technical issues that will continue to arise. With the development of the semantic web (where it is expected that software agents will undertake transactions on our behalf), further deployment of location-based e-commerce services, and widespread adoption of RFID, to give just a few examples, it seems highly likely that technological developments will continue to challenge the ability of legislators and regulators to respond effectively.
While the basic principles of privacy regulation have proved robust and flexible, it may continue to be appropriate to add technology-specific overlays, either in the form of legislative rules or regulatory guidance. Ideally, however, this should be done on a multilateral basis following informed international debate, and not unilaterally at a more local level. A recent example of the latter approach, to add to the US ‘patchwork quilt’, is the Californian bill signed into law by Governor Arnold Schwarzenegger on 12 October 2007, which from 1 January 2008 will make it illegal for anyone (eg, an employer) to force anyone else to have an identification device, such as an RFID chip, implanted under his or her skin.
How soon might we expect to see privacy 2.0?
There are two ways of looking at this. In an ideal scenario, it might be best to start again with all relevant stakeholders around a table to agree a new international convention, supported by the best possible technical standards and privacy-enhancing technologies. As many countries as possible would then be persuaded to sign and ratify that convention and implement it promptly and in a consistent fashion in their national laws. Thereafter, regulators worldwide would collaborate to ensure that they interpreted and enforced the rules in a seamless manner. The technology industry and all relevant service providers would also immediately facilitate the adoption of all approved privacy standards and PETs.
‘Reality 1.0’ (and probably all future versions) is a great deal messier than that! The emergence of Privacy 2.0, if it happens at all, is more likely to resemble the development of an open-source software product or, perhaps, a ‘wiki’ or other collaborative web 2.0 project.
There will be ongoing debates, and probably quite a few arguments, over design, scope, enhancements, implementation and so on. The good news, however, is that in this looser sense the development process already seems to be under way, the starting gun having been fired by the recent calls for radical change from both regulators and industry. Moreover, specific work has begun on ISO privacy standards, and at least some national privacy regulators have already made material changes to their enforcement practices. There are promising signs that the recent momentum will be sustained.