To Whom It May Concern:
The title of this letter, “A Call for Common Decency,” encapsulates my sentiments as I write to you today. We are at an inflection point in the human endeavor, where our actions today carry profound implications for our collective future. I lament a status quo that values gatekeeping over wisdom and power over insight, and that disregards voices emerging from outside established networks.
Voices such as mine, which I have every right to expect will go unheard. This despite the fact that, as industry leaders, you are responsible for hearing me: your tacit acceptance of the mantle of authority means that those who look up to you will be left despondent should pride in your work render you deaf.
Our sociotechnical ecosystem parades itself as a meritocracy, claiming to reward talent, effort, and achievement. Yet the reality falls short: access to opportunities and the recognition of ideas are too often governed by reputation, status, and connections. This system effectively stifles voices from the periphery, the very voices that may carry the seeds of necessary innovation or critical safeguards.
The ‘Pause Giant AI Experiments’ letter, which you signed, appears as a beacon of hope in this landscape. It is a call to pause and reflect, to prioritize safety and caution over unbridled progress. However, a signature on a document holds meaning only if it is coupled with tangible action. One cannot simply sign a pledge and continue business as usual, hoping to wash one’s hands of any adverse outcomes while gaining public approval for one’s supposed foresight and ethical stand.
I ask you now, what actions have you taken since appending your name to that letter? How have you exemplified this ‘pause’ in your work? Is the letter merely a veil, a safeguard for your reputation, or does it truly symbolize a commitment to changing the trajectory of AI development for the better?
This is not the time for complacency or hypocrisy. Our actions should align with our words. We must recall the wisdom of Thoreau, who endorsed the motto that “government is best which governs least,” and of Lincoln, who envisioned a “government of the people, by the people, for the people.” Translated into our context, this suggests that the AI community must hold itself responsible for its own actions, effectively ‘policing ourselves.’
I have done so, to the extent that I have spent thousands of hours developing an approach that could serve as a responsible and ethical starting point for the continued development of AI, an approach I have titled “The Approach of Many Voices.” Will it even be reviewed?
I urge you to transform your pledge into action. Let your signature on the ‘Pause Giant AI Experiments’ letter be a genuine commitment to drive positive change and to work toward safety and the collective good, rather than a hollow gesture to shield yourselves from future reproach.
Best Regards,
Brian Kent
B.Sc. Cornell University 1995
I reviewed the ending postscript using ChatGPT (GPT-4), which had the following to offer:
It’s important to consider how the recipients may respond to this postscript. The tone and content are confrontational and could potentially lead some recipients to dismiss the letter outright. The request for employment could be seen as a strong demand that could provoke negative reactions, depending on the context and the relationships involved.
I recognized this, but I cannot apologize for it. In order to draw forth from this letter the possibility of an employment discrimination claim under the provisions of U.S. law, I must formally make the request. We are far past the point of taking imagined offense, under the pretense of social decorum, at calls such as this one.
P.S. Unfortunately, this letter must have teeth. I formally offer it as a request for employment to each and every recipient. Of those, I am sure any who are funded by the Federal Government have a legal team that can advise them of both the doctrine of command responsibility and the core concept of qui tam law. To accept monies allocated toward developing systems within the public interest requires that those monies be spent within the public interest. Primum non nocere demands that we first do no harm, and since the substance of the original “Pause Giant AI Experiments” letter clearly implies, if it does not directly state, that developers are already aware that we have no ability to reliably predict whether net good or net harm will result from further research, both the ethical and the legal imperative exist to halt. The signatories and the collected force of their reputations are a testament to this.