The U.S. Senate was trying to "hotline", or pass without floor debate, the Wired for Health Care Quality Act. This act is designed to provide grants and support for the adoption of health information technology. Among the many criticisms of the act, the main ones were:
Failure to incorporate adequate patient privacy protections
Insufficient funding ($278 million) to support the scope of the act
Justice Jack Carter issued an opinion in Essent v. Doe holding that plaintiffs should not gain access to the identifying information of anonymous "speakers" (think bloggers) whom they wish to sue for damages without providing "sufficient evidence" that they could prevail at trial. The support for this opinion is based on the right to freedom of speech provided by the First Amendment.
This stems from a case involving the Paris Regional Medical Center (PRMC), where an anonymous blogger (we assume just one) posted detailed accounts of problems at the hospital. (See The-Paris-Site.) The operator of PRMC claimed defamation as well as disclosure of patient-related information and attempted, through various means, to discover the blogger's identity.
Though the court blocked disclosure of the blogger's identity for now, it has given the operator time to submit its "sufficient evidence" argument. Stay tuned.
Several weeks ago, I was in Washington, DC, where I was a panelist at a National Academies workshop on voter registration databases. The focus of the panel was the privacy and security issues associated with electronic voter registration records. In particular, we were asked to address three questions:
1. What principles should guide security decisions? How might these apply to voter registration databases?
2. What privacy considerations need to be taken into account, especially with the impact of combining and linking data?
3. What standard, adversarial test could be applied against each state's database? What would you include in such a test?
It was a lively panel with Peter Neumann (SRI), Glenn Newkirk (InfoSENTRY Services), Jim Horning (SPARTA, Inc.), and myself. My talk focused on the privacy issues inherent in sharing personal information in both public and private settings, as well as the privacy-violating inferences that can be gleaned from voter histories.
Good news. I was recently named a Stahlman Scholar by the Vanderbilt Center for Biomedical Ethics and Society (CBES). As many of you know, my research in data privacy is driven by both technical and social perspectives (especially because privacy is an inherently social phenomenon). A large portion of my research has focused on the development of algorithms and software to re-identify seemingly anonymous health information, as well as to provably protect health information shared for secondary research purposes. That said, my work has had little to say about the ethics or social justice concerns surrounding re-identification and protection technologies. The CBES award is to support an investigation into how re-identification technologies affect the scientific community and the public at large. For example, one question that this work will look into is the following. Imagine that you develop a technology that can re-identify person-specific health information in a public repository. What should you do? Should you notify the individuals whose records you have re-identified? What about the organization that posted the records? Should you publish your methods to help advise other organizations on the pitfalls associated with "protecting" their records in the same way as the organization whose records you compromised?
In a sense, this work is similar to studies in the ethical hacking community. However, the problem is quite different: in a hacking environment, we are normally talking about system or computer security, and when you find a "hole" in the security, you can notify the owners of the affected systems and post a patch (à la Microsoft's extreme programming model). Yet when we consider privacy and re-identification issues, we have to recognize that the data is public and may be used by many people for legitimate purposes, and many times the privacy vulnerabilities are not due to a single location's negligence. Rather, multiple organizations disclose information that in combination leads to a failure of protection in the system. Thus, none of the organizations violated the law, and none may even be accountable for their actions, but the system is still broken.
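To make that multi-organization failure concrete, here is a minimal sketch of the classic linkage attack, where two releases that are each "anonymous" on their own combine to identify a person. All field names, records, and values below are invented for illustration; no real dataset or organization is implied.

```python
# Hypothetical illustration of a linkage (re-identification) attack.
# Release A: "de-identified" hospital discharge data (no names).
discharges = [
    {"zip": "37203", "birth_date": "1975-02-14", "sex": "F", "diagnosis": "asthma"},
    {"zip": "37212", "birth_date": "1980-07-01", "sex": "M", "diagnosis": "diabetes"},
]

# Release B: public voter registration records (names, but no diagnoses).
voters = [
    {"name": "Alice Smith", "zip": "37203", "birth_date": "1975-02-14", "sex": "F"},
    {"name": "Bob Jones", "zip": "37212", "birth_date": "1980-07-01", "sex": "M"},
]

# Fields present in both releases: the "quasi-identifiers".
QUASI_IDENTIFIERS = ("zip", "birth_date", "sex")

def link(release_a, release_b, keys=QUASI_IDENTIFIERS):
    """Join two releases on shared quasi-identifiers, re-attaching names."""
    index = {tuple(rec[k] for k in keys): rec for rec in release_b}
    matches = []
    for rec in release_a:
        hit = index.get(tuple(rec[k] for k in keys))
        if hit:
            matches.append({**rec, "name": hit["name"]})
    return matches

for match in link(discharges, voters):
    print(match["name"], "->", match["diagnosis"])
```

Note that neither release alone discloses a name alongside a diagnosis; the failure only emerges from the join, which is exactly why no single discloser appears negligent.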
Last week, I had the pleasure of attending and speaking at the 3rd Electronic Health Information and Privacy Conference. The conference was an overview of work in health privacy from many different perspectives, including behavioral economics (thanks to an excellent presentation of recent research by Alessandro Acquisti), data privacy, law, and policy. Though there was almost a foot of snow on the ground, the workshop was a tremendous success. They had to shuffle sessions around due to various delays at the airports (especially in Toronto) and on the roads, but only one session was lost. In general, the majority of the conference focused on privacy and its challenges from a Canadian perspective. Personally, I found it amazing (and refreshing) how many of our neighbors to the north are thinking about these technological and social issues.
Thanks to Khaled El Emam at the University of Ottawa for organizing a fantastic day of talks and discussion.
I just read an interesting article in Wired on Silicon Valley startup 23andme. The company aims to provide personalized predispositions based on single nucleotide polymorphisms (SNPs). In short, here's the process: you send them your saliva, they genotype it, map the resulting SNP variants to the existing translational and clinical literature, and then provide you with a web-accessible summary of your predispositions. It appears they have a crack team of biomedical and computational advisors. It's certainly something to keep an eye on.
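The variant-to-literature step can be pictured as a lookup from observed genotypes into a curated knowledge base. This is only a sketch of the general idea, not 23andme's actual pipeline; the rsIDs, genotypes, and associations below are all invented for illustration.

```python
# Hypothetical sketch of mapping genotyped SNPs to literature annotations.
# All rsIDs, genotypes, and findings here are made up for illustration.

# A customer's genotyped SNP variants: rsID -> observed genotype.
customer_snps = {"rs0000001": "AG", "rs0000002": "TT"}

# A curated knowledge base linking (rsID, genotype) pairs to reported findings.
literature = {
    ("rs0000001", "AG"): "hypothetical elevated risk of condition X",
    ("rs0000002", "CC"): "hypothetical reduced risk of condition Y",
}

def summarize(snps, knowledge_base):
    """Report each observed variant that has a matching literature annotation."""
    report = {}
    for rsid, genotype in snps.items():
        finding = knowledge_base.get((rsid, genotype))
        if finding:
            report[rsid] = finding
    return report

print(summarize(customer_snps, literature))
```

The second customer variant above matches no annotation, so it is simply omitted from the report, which mirrors the reality that most genotyped SNPs have no established clinical interpretation.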