We are at an amazing stage in the development of artificial intelligence, where excitement, anticipation, and speculation are driving important philosophical questions about how AI and humanity will coexist in the future. Will we be able to implant mechanisms to control our machines, or will they simply outthink us and end up independently coexisting with us, or possibly controlling us? Before we face that challenge, we must ensure that we can absorb the many benefits of AI and address the social changes they will bring.
Change Management Needed To Adapt Society To New Technologies
AI itself is not the immediate risk to humanity's existence; rather, our ability to adapt to the changes it brings could decide our fate. AI presents a worldwide change-management challenge for the coming millennium, one that will require coordinated economic and cultural responses to avoid inequality and strife. Fortunately, adaptability is a hallmark of our species, which gives good reason to believe we will rise to the challenge.
In the race to develop AI technology, it may not be possible to adequately conceive and implement lasting controls. Self-regulation within this booming field will be secondary for the major players seeking competitive advantage, and their secrecy and monopolization of technical knowledge are already apparent. The picture is further complicated by disinformation: the touting of short-term benefits, such as virtual assistants, and the domination of purportedly altruistic AI organizations by industry heavyweights, as is the case with OpenAI, which aims to promote and develop friendly AI. The tech-firm strategy of dominating AI by buying up top technical talent drains the academic and government institutions that may be best positioned to provide independent research that is openly published and subject to critical and ethical discussion.
Level The Playing Field
The present playing field gives wide latitude to the privately funded researchers who will be taking us forward on the AI evolutionary path. In the short term, the technology will keep advancing while our ability to address the concerns raised in many philosophical discussions is stalled, and even impeded, by the industry. The lack of parity between these two sides presents a dangerous gap, in which technological realities outpace our ability to control, or even understand, what is happening at both the machine level and the societal level. As a society, it seems prudent to maintain visibility into the magnitude of this gap and find ways to interpret it, so that we can anticipate and address the interim social issues that will result. Given the enormous wealth that AI is expected to generate, directing investment from industry profits toward the technical and social sciences, specifically to deal with the repercussions of AI technology, deserves serious consideration.