The U.S. Senate was trying to "hotline", or pass without floor debate, the Wired for Health Care Quality Act. This act is designed to provide grants and support for the adoption of health information technology. Among the many criticisms of the act, the main ones were:
Failure to incorporate adequate patient privacy protections
Lack of funding ($278 million) to support the scope of the act
Justice Jack Carter issued an opinion in Essent v. Doe holding that plaintiffs should not gain access to the identifying information of anonymous "speakers" (think bloggers) whom the plaintiff wants to sue for damages without providing "sufficient evidence" that the plaintiff could win at trial. The opinion rests on the right to freedom of speech provided by the First Amendment.
This stems from a case involving the Paris Regional Medical Center (PRMC), where an anonymous blogger (we assume a single individual) posted detailed information about problems at the hospital. (See The-Paris-Site) The operator of PRMC claimed defamation as well as disclosure of patient-related information and attempted to discover the identity of the blogger through various means.
Though the court prevented the blogger's identity from being revealed at this time, it has given the operator time to make its "sufficient evidence" showing. Stay tuned.
Several weeks ago, I was in Washington, DC, where I was a panelist at a workshop of the National Academies on voter registration databases. The focus of the panel was on the privacy and security issues associated with electronic voter registration records. In particular, we were asked to address three questions:
1. What principles should guide security decisions? How might these apply to voter registration databases?
2. What privacy considerations need to be taken into account, especially with the impact of combining and linking data?
3. What standard, adversarial test could be applied against each state's database? What would you include in such a test?
It was a lively panel with Peter Neumann (SRI), Glenn Newkirk (InfoSENTRY Services), Jim Horning (SPARTA, Inc.), and myself. The focus of my talk was on privacy issues inherent in sharing personal information in both public and private settings, as well as the privacy-violating inferences that can be gleaned from voter histories.
Good news. I was recently named a Stahlman Scholar by the Vanderbilt Center for Biomedical Ethics and Society (CBES). As many of you know, my research in data privacy is driven by both technical and social perspectives (especially because privacy is an inherently social phenomenon). A large portion of my research has focused on the development of algorithms and software to re-identify seemingly anonymous health information, as well as to provably protect health information shared for secondary research purposes. That said, my work has said little about the ethics or social justice concerns surrounding re-identification and protection technologies. The CBES award is to support an investigation into how re-identification technologies affect the scientific community and the public at large. For example, one question this work will look into is the following. Imagine that you develop a technology that can re-identify person-specific health information in a public repository. What should you do? Should you notify the individuals whose records you have re-identified? What about the organization that posted the records? Should you publish your methods to help advise other organizations on the pitfalls associated with "protecting" their records in the same way as the organization whose records you compromised?
In a sense, this work is similar to studies in the ethical hacking community. However, the problem is quite different because in a hacking environment, we are normally talking about systems or computer security. And, when you find a "hole" in the security, you can notify the owners of the affected systems and post a patch (à la Microsoft's extreme programming model). Yet, when we consider privacy and re-identification issues, we have to recognize that the data is public and may be used by many people for legitimate purposes, and many times the privacy vulnerabilities are not due to a single location's negligence. Rather, multiple organizations disclose information that, in combination, leads to a failure of protection in the system. Thus, none of the organizations violated the law, and none may even be accountable for their actions, but the system is still broken.
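To make the multi-source failure concrete, here is a toy sketch of the classic linkage attack: two releases, each arguably harmless on its own, are joined on shared quasi-identifiers (ZIP code, birth date, sex) to re-identify "anonymous" records. All data and field names below are invented for illustration.

```python
# Toy illustration of a linkage re-identification attack.
# Neither dataset alone reveals identities, but joining on shared
# quasi-identifiers (zip, dob, sex) can.

# "De-identified" hospital discharge records (names removed)
health_records = [
    {"zip": "37203", "dob": "1965-07-21", "sex": "F", "diagnosis": "asthma"},
    {"zip": "37212", "dob": "1980-01-02", "sex": "M", "diagnosis": "diabetes"},
]

# Public voter registration list (identified, but no health data)
voter_list = [
    {"name": "Alice Smith", "zip": "37203", "dob": "1965-07-21", "sex": "F"},
    {"name": "Bob Jones",   "zip": "37212", "dob": "1980-01-02", "sex": "M"},
]

def link(records, voters):
    """Join the two releases on the quasi-identifier triple."""
    matches = []
    for r in records:
        key = (r["zip"], r["dob"], r["sex"])
        candidates = [v for v in voters
                      if (v["zip"], v["dob"], v["sex"]) == key]
        if len(candidates) == 1:  # a unique match is a re-identification
            matches.append((candidates[0]["name"], r["diagnosis"]))
    return matches

print(link(health_records, voter_list))
# [('Alice Smith', 'asthma'), ('Bob Jones', 'diabetes')]
```

Note that neither publisher did anything illegal here; the failure only emerges from the combination, which is exactly the accountability gap described above.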
Last week, I had the pleasure of attending and speaking at the 3rd Electronic Health Information and Privacy Conference. The conference was an overview of work in health privacy from many different perspectives, including behavioral economics (thanks to an excellent presentation of recent research by Alessandro Acquisti), data privacy, law, and policy. Though there was almost a foot of snow on the ground, the workshop was a tremendous success. They had to shuffle around sessions due to various delays at the airports (especially in Toronto) and on the roads, but only one session was lost. In general, the majority of the conference focused on privacy and its challenges from a Canadian perspective. Personally, I found it amazing (and refreshing) how many of our neighbors to the north are thinking about technological and social issues.
Thanks to Khaled El Emam at the University of Ottawa for organizing a fantastic day of talks and discussion.
I just read an interesting article in Wired on Silicon Valley startup 23andme. The company aims to provide personalized predisposition reports based on single nucleotide polymorphisms. In short, here's the process: you send them your saliva, they genotype it, map the resulting SNP variants to the existing translational and clinical literature, and then provide you with a web-accessible summary of your predispositions. It appears they have a crack team of biomedical and computational advisors. It's certainly something to keep an eye on.
A NJ hospital recently discovered (or just revealed) that more than twenty employees inappropriately accessed George Clooney's electronic medical record. Personally, I don't find this surprising, but I'm amazed that the hospital didn't have any protections in place for VIPs (how about a pseudonym?). Still, let's not be too quick to jump to conclusions about the hospital's data management or data access policies. The story is still unfolding.
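For what it's worth, the kind of VIP safeguard I have in mind needn't be complicated. Here is a hypothetical sketch (all names and record numbers invented) of an audit check that flags any access to a record marked as VIP by someone not on that record's care team:

```python
# Toy access-audit check: flag any access to a VIP-flagged record
# by an employee who is not on that record's care team.

vip_records = {"MRN-001"}                   # records flagged as VIP
care_team = {"MRN-001": {"dr_house"}}       # authorized staff per record

def audit(access_log):
    """Return (employee, record) pairs that warrant review."""
    return [(who, rec) for who, rec in access_log
            if rec in vip_records and who not in care_team.get(rec, set())]

log = [("dr_house", "MRN-001"),   # authorized: on the care team
       ("nurse_x", "MRN-001"),    # not on the care team -> flagged
       ("nurse_x", "MRN-002")]    # not a VIP record -> ignored
print(audit(log))                 # [('nurse_x', 'MRN-001')]
```

Real systems would pair this with pseudonymized record labels and break-the-glass logging, but even a check this simple would have surfaced twenty curious employees quickly.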
If you keep up with the popular news outlets, then you've seen the fanfare associated with Microsoft's new online personal health record system, HealthVault. If not, here are several links to NY Times stories:
Researchers at RPI are using de-identified DNA, with no additional family history, to predict ancestral heritage. The method basically uses a combination of single nucleotide polymorphisms (SNPs) in a classification algorithm (apologies, but I haven't taken the time to look into the algorithm yet).
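Again, I haven't dug into the RPI algorithm, but the general idea of classifying ancestry from a SNP vector can be sketched with something as simple as a nearest-centroid classifier over allele counts. Everything below (population names, marker values) is made up for illustration; real methods use far more markers and more sophisticated models.

```python
# Toy nearest-centroid ancestry classifier over SNP allele counts.
# Each individual is a vector of minor-allele counts (0, 1, or 2)
# at a handful of SNP positions.

# Hypothetical per-population mean allele counts (the "centroids")
centroids = {
    "population_A": [0.1, 1.8, 0.3, 1.5],
    "population_B": [1.7, 0.2, 1.6, 0.4],
}

def classify(snps):
    """Assign the population whose centroid is closest in squared distance."""
    def dist(centroid):
        return sum((s - m) ** 2 for s, m in zip(snps, centroid))
    return min(centroids, key=lambda p: dist(centroids[p]))

print(classify([0, 2, 0, 2]))  # nearest to population_A's centroid
print(classify([2, 0, 2, 0]))  # nearest to population_B's centroid
```

The privacy point is that the input is "de-identified" DNA with no family history attached, yet the output is still a sensitive inference about the individual.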
Recently, Senators Leahy and Kennedy introduced the Health Information Privacy and Security Act to the Senate.
Title I of the bill, "Individual Rights," guarantees an individual's right to supplement, amend, correct, or destroy any of their protected health information maintained or stored by an entity. It also would require entities maintaining, accessing, using, or storing protected health information to provide the individual with a notice of privacy rights and practices, and notify individuals when data corruption or loss of health information is discovered.
Title II of the bill, "Restrictions on Use and Disclosure," includes requirements on groups seeking to disclose protected health information to obtain a signed, written authorization from an individual in connection with any treatment, payment, or other purpose.
Also, individuals must be provided with notification in the case of an actual or attempted security breach if there is at least a "reasonable belief" that protected health information concerning the patient was accessed or acquired during the breach.
Jane Gross, at the NY Times, penned a recent story on the ways in which various health organizations (and their employees) misinterpret the HIPAA Privacy Rule. It's filled with anecdotal evidence, but it does highlight some of the confusion and the challenges of implementing the privacy rule correctly.
In January 2007, the Government Accountability Office issued a report on the federal government's lack of an overall privacy framework for a national health information infrastructure. They summarized the report in a one-pager this morning. In summary, the following challenges were recognized:
Resolve legal / policy issues
Ensure appropriate disclosure
Enable an individual's right to access / amend health record
Integrate security measures for electronic health information
According to a press release, Ancestry.com is teaming up with Sorenson Genomics so that people can add the results of various genetic tests to their online family tree websites. I don't want to shout fire in a crowded theater, especially since I don't know what the "DNA results" will correspond to, but at the same time, you have to wonder where the oversight is, who will have access to this information, whether full-family consent will be necessary, ... the questions are endless.
Though not a critical blow, the resignation of Paul Feldman is clearly an indication that the definition and integration of privacy policies in the coming national healthcare infrastructure are lacking.
CNN is reporting that the FBI has admitted to losing laptops containing classified, as well as identifying, information. Now, there's no proof that the information on these laptops was, or can be, accessed by the culprits. However, the same cannot be said for the stolen firearms...
From Government Computer News: The Department of Homeland Security (DHS) believes that governments and companies need to share biometric data internationally to assist in anti-terrorism efforts. Privacy advocates and biometrics professionals are pushing back, given the current state of privacy controls and policies and the lack of international oversight.
Unfortunately, this issue has played out along ethical, as opposed to technical (or better yet, a combination of the two), lines. The DHS view amounts to "We already have the technology to capture and share the information, so why shouldn't we?" Yet a more helpful way to phrase the problem might be "How can we share biometric data to achieve surveillance and anti-terrorism goals without compromising the privacy of the majority of those whose data is surveilled?"
Two words. So different in meaning, and yet, people often make the semantic slip of substituting one for the other. It's an amusing and honest mistake. I know that I've done it before, but you might expect that the editor would catch the faux pas in the title of an article:
The GAO commended DHHS for creating advisory committees on privacy and security topics, as well as for drafting contracts that explicitly require the recipients to address privacy issues. Yet, the GAO is concerned that it is unclear how privacy protections will be developed and administered.
Back in August, it was reported that Blue Cross Blue Shield (BCBS) will share claims and health information on 79+ million people with employers, drug companies, and other private and public organizations. A list of the BCBS providers can be found here.
In an interesting response, the head of the Patient Privacy Rights Foundation said "This move by the Blues reveals what Americans can expect from an electronic health system because they no longer have the right to control access to their medical records. Their sensitive health records will be used for corporate profits and in ways that can directly harm them."
But wait, let's think about this... did we ever have the right to control access to our medical records?
I agree that the unregulated disclosure of patient-specific information is potentially harmful to the enrollees of BCBS. Granted, until we know the control mechanisms that have been instituted by the BCBS administration, we can't determine the extent to which patients are being put in harm's way. If the appropriate safeguards are taken, then it is possible to share patient-specific data with provable patient protections. Anyone know the details of this endeavor?
(from the Med-Privacy mailing list) In the three years since the enforcement provisions of the HIPAA Privacy Rule went into effect, more than 21,000 complaints alleging privacy violations have been filed with the Office for Civil Rights at HHS. Yet only two criminal cases have been filed, and NO fines have been assessed in response to any of those complaints.
Congress’ intent in passing the HIPAA statute in 1996 was to create strong protections for patients' privacy. Yet ten years later, patients' most sensitive information is more exposed and vulnerable than ever before. Lax enforcement, inadequate penalty provisions and HHS amendments in 2002 turned HIPAA into an act that allows patients' most sensitive information to be shared without their permission and without penalty for improper use.
The New York Times ran an interesting story discussing breaches of confidentiality in electronic medical record systems. Worth a read.
"Dr. Craig Smith performed heart surgery on former President Bill Clinton two years ago at NewYork-Presbyterian Hospital. Computer hackers tried to get a peek at the famous patient’s electronic medical records."
Sure, prevent the hackers - but watch out for the insiders:
"The same hospital thwarted 1,500 unauthorized attempts by its own employees to look at the patient records of a famous local athlete"
In his state of the city address, New York City Mayor Michael Bloomberg said that the New York City 911 system will be upgraded to accept digital images from cell phones in addition to phone calls. So, everyone with a cell phone, click and send.
I wonder what the data use and retention policies are going to be. What is 911 going to do with the images? What are they permitted, legally, to do with the images? Where are they going to be stored, and for how long? It's a bit disconcerting when you think about it, especially considering that cell phone cameras are crossing the public-private divide. Take my picture with a public webcam, such as in Central Park, please. I'm in a public area and the expectation of privacy is low. But take my picture someplace else, such as... oh, ... say the comfort of my living room, and there may be some concerns.
Pretexting - the act of gaining data under false pretenses. Bills have been proposed to ban pretexting in the US Congress and in California. The California bill passed, but we'll have to wait until Congress reconvenes to find out if the ban will gain national standing.
To clarify, the "California antipretexting law" (Senate Bill 202) will take effect on Jan 1 and will cover the telephone records of California citizens. The law prohibits the release of an individual's phone records to anyone but the original caller.