The articles and court cases this week point to a common concern for our class: technology does not run itself. Most notably, while the article describing the exploitation of full-body scanners at a celebrity's expense is light-hearted in tone, its implications are grave. After even a single occurrence of misuse, the implementation of full-body scanners must be reconsidered.
We so often refer to "the government" and "technology" as if they were autonomous entities acting upon us (humanity). But the agents of these "entities" are just as human as their subjects. Technology therefore has as much potential for malicious intent, corruption, and unlawfulness as politicians and pedestrians do. The obvious suggestion is that there needs to be more legislation governing how new technologies (like the full-body scanners) are to be used. Any infringement of these policies should result in legal penalties and accountability; presumably, going to jail for misusing the scanners would be a better deterrent than the mere possibility of losing one's job. Technology is evolving fast enough on its own that a little more time spent ensuring product-specific laws are in place before a product is deployed would be a nominal obstacle. As government technology becomes more powerful, and its ability to invade our privacy (however lawfully) escalates, we must be comparably vigilant in ensuring it cannot be misused. I mean no offense to security checkpoint officers at airports, but there is no expectation of their morality, only of their ability to do a job. The celebrity case points exactly to the problem of entrusting such invasive technology to someone whose trustworthiness has not been properly tested.
I suggest, therefore, that preemptive legislation is not quite enough. And as much as I claimed that involving the legislative branch in the development stage of technology would be a nominal burden, it should still be avoided. Instead, the technologies themselves should be held to a stricter standard: infallibility in the face of human immorality. In other words, a new technology (especially one with the potential to violate the Bill of Rights) should not be deemed suitable for use unless it can be shown that all steps have been taken to reduce the necessary amount of human involvement. I am no techie, but it seems reasonable that with a few more years of development the scanners could identify for themselves whether there is something suspicious on a person's body, and only then would the image produced be made visible to human eyes. This is not the case with baggage x-rays (though I think the case has been made that there is a large difference between a body x-ray and a baggage x-ray in terms of privacy). If these technologies are going to change the face of border security for decades, there is no reason we can't wait one year, or even one decade, to ensure that our assumption of technology's autonomy is as accurate as possible.
Thursday, April 1, 2010