Erratum and addendum – GOOGLE'S Duplex: Pretending to be human

Author: Daniel E. O'Leary
Date: 01 April 2019
Published date: 01 April 2019
DOI: http://doi.org/10.1002/isaf.1454
ERRATUM
Erratum and addendum – GOOGLE'S Duplex: Pretending to be human
Daniel E. O'Leary
University of Southern California, USA
In "GOOGLE'S Duplex: Pretending to be human" in Intelligent Systems in Accounting, Finance and Management (DOI: 10.1002/isaf.1443), the published version of Section 7 omits some quotation marks and misattributes some material to the wrong source. That portion of Section 7 should read:
There have been two primary efforts to craft an understanding of the ethics associated with robots and artificial-intelligence-like systems: BS 8611 (BSI, 2016) and IEEE's Ethically Aligned Design (IEEE, 2016; IEEE, 2017). Our primary attention will be focused on the IEEE's efforts.
For IEEE, concern with artificial intelligence and impersonating people first appears in 2017 (IEEE, 2017, p. 258). As noted in that document, A/IS (autonomous and intelligent systems)
“… may also deceive and harm humans by posing as humans. With the increased ability of artificial systems to meet the Turing test there is a significant risk that unscrupulous operators will abuse the technology for unethical commercial, or outright criminal, purposes. The widespread manipulation of humans by A/IS and loss of human free agency, autonomy, and other aspects of human flourishing, is by definition a reduction in human wellbeing. Without taking action to prevent it, it is highly conceivable that A/IS will be used to deceive humans by pretending to be another human being in a plethora of situations or via multiple mediums. Without laws preventing A/IS from simulating humans for purposes like deception and coercion, and enforcing A/IS to clearly identify as such, mistaken identity could also reasonably be expected.

While individuals may enjoy the ability of A/IS to simulate humans in situations where they are pure entertainment, explicit permission and consent by users in the use of these systems is recommended, and the wellbeing impacts on users should be monitored, researched, and considered by the A/IS community in an effort to provide services and goods that improve wellbeing. As part of this, it is important to include multiple stakeholders, including minorities, the marginalized, and those often without power or a voice.”
IEEE (2017) appears to be one of the first documents to suggest that developing computer systems aimed at simulating, imitating, or impersonating people using AI was not ethical or appropriate, and that obtaining explicit permission and consent was recommended. IEEE (2016, 2017) may predate Google's Duplex development and its apparent approach of testing the system on people unaware that the entity on the other end of the phone was a computer program. It is also likely, though, that parallel efforts were under way at the same time: at IEEE regarding ethics and at Google regarding Duplex.
ORCID
Daniel E. O'Leary https://orcid.org/0000-0002-5240-9516
© 2019 John Wiley & Sons, Ltd. Intell Sys Acc Fin Mgmt. 2019;26:106. wileyonlinelibrary.com/journal/isaf
