This week, Gary leads the class. He's going to provide some interesting insights into how these ideas play out in AI and current cultural issues...
If you want to visit the links Gary provides, you can find them in the comment area of the last post (questions for 2/6/06)...when I transferred the material, the links didn't take. My bad.
Anyway, post on! These give us excellent food for thought for Monday night's class.
1) Considering how “generalized surveillance” has led to (our) “disciplinary society” (365), what are some instances in which we can observe the effect of Panopticism – a “state of conscious and permanent visibility that assures the automatic functioning of power” (360) – at present? (Within the current context of the Patriot Act and wiretapping in the U.S., recall Foucault’s point about how the Panoptic discipline-mechanism operates separately from the law (370) and the ostensible limits on power.)
--or--
Bentham imagined “a network of mechanisms” throughout society in place of the prison (364). What can we conclude about how we elect to use network technologies that are potentially (and often essentially) Panoptic, while others turn similar devices and methods toward counter- and inverse surveillance?
(See also the idea of sousveillance.)
2) Elliott mentions Kramer’s label of “cosmetic psychopharmacology” (376) regarding the recent trend in medication (e.g. Prozac). Given the combination of the capitalist healthcare industry and our consumerist society, will ethical concerns shape our attitudes toward, and uses of, enhancement treatments and technologies? Can ethics even play a significant role in opposing enhancement technology when “cultural complicity” (374) and “authenticity” (377) are themselves ideological critiques in the first place?
3) Here is a link to the ELIZA program that Joseph Weizenbaum developed in 1966, a primitive A.I. simulation of a Rogerian therapist. The Jabberwacky chatterbots named “George” and “Joan” yield much more interesting and more fluid conversations.
Having read both the Dreyfus brothers’ skeptical criticism and Kurzweil’s idealistic view, will A.I. remain limited even as “expert systems” (407-8), or will progress eventually result in A.I. having self-awareness and spirituality (393)? How do you predict A.I. will develop, and more importantly, how will it function in (or as part of) our society? Can we reasonably presume that the future will be neither utopian nor dystopian (à la The Matrix) regarding this issue?
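For a sense of how little machinery sits behind a program like ELIZA, here is a minimal sketch of its keyword-and-reflection approach in Python. The rules, phrasings, and function names below are invented for illustration; Weizenbaum's actual script had many more ranked keyword rules and decomposition templates.

```python
import re

# Pronoun reflections so "I am sad" mirrors back as "you are sad".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# Illustrative (keyword pattern, response template) pairs, tried in order.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback, like ELIZA's content-free prompts
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    """Return the first matching rule's template, filled with the reflected capture."""
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower().strip(".!?"))
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am afraid of computers"))
# → "How long have you been afraid of computers?"
```

The trick, as Weizenbaum himself stressed, is that no understanding is involved: the program only matches surface keywords and echoes the user's own words back, which is partly why the "therapy" frame works so well.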
4) Our relationships with Artificial Intelligence may be significant in future years (more like those depicted in the film A.I.), but presently we don’t regard the computer as a “subjective” agent who “does things to us” (423), for example. Though Turkle published this article in 2002 (source), which of her ideas seem outdated and/or irrelevant, not necessarily due to the passage of time but due to our current socio-techno trends and views?
(Some readers may even challenge Turkle’s ideas fundamentally, on the grounds that she employs a theory (psychoanalysis) that seems obsolete for this context. I gave her an honest read, but is there anything applicable to “computer culture” that we can salvage?)
5) Outside of his scientific context, how does Ihde’s idea of “technoconstruction” (485, plus Kaplan’s intro on 432) apply to our uses of technology (mainly personal computers) regarding writing, information, media, communication, community, society, etc. (i.e. within a social sciences/humanities context instead)? Could his term be a useful label for a digital/network paradigm or episteme (knowledge/way of thinking) in our present age?