International Journal of Scientific & Engineering Research, Volume 4, Issue 4, April-2013

ISSN 2229-5518

Editorial for the Special Section on Ethics and Affective Computing

Kuljit Singh

------------------------------------------------

Abstract:

The sudden escalation in informational and computational technologies is quickly making things possible that were impossible just a few years ago. As these new possibilities become realities, very real ethical dilemmas arise that challenge the very foundations of ethics, traditionally conceived. One need only consider the 3D printers that are about to hit the market and that will allow individuals to print working firearms at will. Such a possibility will, no doubt, have policy makers wondering how to handle the situation in the absence of existing laws to cover such an inevitability.

Introduction:

Challenges are mounting on other fronts as well, issues with predator drones and autonomous weaponry being among them. Such issues may well make the topic of this issue seem trivial. It is not. For instance, one of the ethical issues attached to affective computing reaches to the foundations of ethics by challenging our common sense belief that truth-telling is a value and that deception is simply wrong, at least in most contexts. In brief, the problem can be stated this way: if robots are to be widely adopted in society, they need to be like us. Thus, giving them simulated emotions seems essential. For instance, when it comes to the use of robotic pets in eldercare, lifeless, unaffective robots would be poorly suited to the task for which they are designed. At the same time, to give such robotic pets the ability to act in such a way as to make us feel good seems to be simply deceptive. If deception is wrong simpliciter, then so are simulated emotions; but if the use of simulated emotions is wrong, then implementing the affective qualities needed to make some machines able to function as needed would also seem wrong. Something is either amiss with our common understanding of the ethics of deception, or research in affective computing, which often amounts to designing machines precisely in order to deceive us, is misguided. The situation is not limited to such innocuous creatures as mere pets, either; when we realize that a robotic pet may simultaneously be a weapon or a spy, the issues start to compound.

In the first paper of this section, “Are Emotional Robots Deceptive?”, Mark Coeckelbergh hits the central issue just mentioned head on. Taking a common sense approach, Coeckelbergh notes that robots must be suitably designed to respond appropriately, in such a way that humans understand what is genuinely being communicated, in order to facilitate open cross-entity communication. However, this must be done carefully, in such a way that humans do not dismiss robot communication with what he calls a “deception response.”

-----------------------------------------------

Kuljit Singh is with the Department of Information Technology and Management, Punjab Technical University. His work focuses on international conferences and journal papers. E-mail: kuljitsingh333@gmail.com

For information on obtaining reprints of this article, please send e-mail to: kuljitsingh333@gmail.com

In “Red-Pill Robots Only, Please,” Bringsjord and Clark challenge approaches like Coeckelbergh’s. In a play on The Matrix of movie fame, blue-pill robots are engineered to deceive, and embracing them will lead to a cascade of moral issues by pushing our society further away from values associated with truth and toward those associated with pleasure. Our love for “digital illusions” is consonant with their argument and may indicate that there is already cause for concern, even prior to the prevalence of affective, blue-pill machines.

Sullins keeps us on the pleasure track with “Robots, Love and Sex: The Ethics of Building a Love Machine.” Admittedly, something always sounds a little goofy and unimportant, if not slightly embarrassing, about raising the topic of sex robots, though few have any doubt that they will be among us in record numbers. Sullins invites us to take the issue seriously by putting forth the notion of “erotic wisdom,” while simultaneously arguing that we must lay down some constraints when it comes to designing machines that can manipulate human psychology at such a deep level.

Steering a sensible course between the issues, Cowie argues in “The Good Our Field Can Hope to Do, the Harm It Should Avoid” that, while most affective applications are morally neutral, simulated affects might well amount to a kind of deception. However, the situation is not as simple as good versus bad, since there are several moral positives that can come from research in this area. This paper enumerates some of the moral positives and negatives that pertain here to underscore the balancing act that researchers must perform when approaching the design of affective machinery.
In “The Affect Dilemma for Artificial Agents: Should We Develop Affective Artificial Agents?”, Scheutz takes a somewhat different angle, noting that robots without affects and affective sensibilities may well cause more harm than those with them, but that endowing them with affects also transforms them into patients for our moral regard. In this paper, Scheutz argues that we must nonetheless build them, offering five reasons to do so before closing with a brief enumeration of the challenges ahead.

Finally, Guarini offers a critique of my own work in ethical theory with his paper “Conative Dimensions of Machine Ethics: A Defense of Duty.” I have argued elsewhere that ethics, traditionally conceived, hangs on a fundamental contest between our affective desires and our sense of obligation; as such, ethics, traditionally conceived, is outmoded and ill-suited to solve problems arising from and within autonomous systems. (See Guarini’s paper for references.) Guarini counters with a defense of deontology, noting that the conflict between affect and obligation that motivates Kantian ethics might be reworked along the lines of an obligation-obligation conflict that can preserve a notion of duty applicable to machines.

Together the papers make a nice set, and I am pleased with the way they (unintentionally) build off of each other. Nonetheless, my hope here is that the reader will walk away from this volume with more questions than answers. Indeed, it is the job of the ethicist to complexify first, in an effort to lay out the nuances of an issue before arriving at a conclusion. These are early days for the field, and answers at this point would be premature; but given the speed with which the field is developing, opening up the questions is essential.

I would like to thank the several referees who assisted with evaluating the papers included herein, and their authors, who received this criticism with grace and dignity. I would also like to thank the IEEE Computer Society for the opportunity to compile this volume.
Kuljit Singh

Author

Kuljit Singh is a businessman and a student in the IT field, and the managing director of his own firm. His primary research is in the philosophy of information, including the challenges that informational and computational technologies are posing for macroethics. He has published several papers on and around these themes, along with editing several special journal issues and publishing conference papers. He has received several awards for his work as managing director of his firm.


IJSER © 2013 http://www.ijser.org