The decision by OpenAI’s board of directors to fire the company’s co-founder and chief executive officer Sam Altman, only to reinstate him four days later, speaks volumes about how the people who govern the rapidly evolving field of generative AI prioritise commercial success over humanitarian morals. The stated reason for Altman’s firing was that he “was not consistently candid in his communications” and that the board “no longer had confidence in his ability to continue leading OpenAI”. Following Altman’s ousting, hundreds of OpenAI employees, including co-founder and board member Ilya Sutskever, signed a letter demanding that the remaining board members resign or they would leave.
According to The Economist, there was speculation that Altman was moving too quickly to expand OpenAI’s commercial offerings without adequately considering the safety implications. However, Altman has previously expressed concern about AI safety, stating: “I think if this technology goes wrong, it can go quite wrong and we want to be vocal about that. We want to work with the government to prevent that from happening, and we try to be very clear-eyed about what the downside case is and the work we have to do to mitigate that”.
The Silicon Valley Shake-Up
The initial firing of Altman reflects a larger shake-up in Silicon Valley between two camps of people: the “Doomers”, who fear where the technology could lead, call for AI regulation and tend to favour closed-source, proprietary software, as opposed to the “Boomers”, technology optimists who champion rapid development and open-sourced software. The aforementioned AI regulations are framed around safety, but they would also make it harder for new companies to break into the industry and therefore protect the investments of incumbent firms. Sam Altman, who was (and is) the human face of generative AI, particularly since the release of ChatGPT in 2022, was able to walk the line between these two camps.
Win-Win Scenario at Microsoft
Here is where Microsoft comes in. After Sam Altman’s initial firing, the software visionary was initially welcomed into Microsoft, along with the employees who had threatened to leave following his ousting. Microsoft, which owns 49 percent of OpenAI, a company valued at USD 29 billion, has invested more than USD 10 billion in it since 2019. Had Altman joined Microsoft, as it was initially expected he would, Microsoft would have retained both its licensing rights with OpenAI and its relationship with Altman, along with a slew of researchers and engineers to lead a new advanced AI research team. The Financial Times reports that Microsoft is still hoping for governance changes that could give it a say in how the company is run.
The Future of Governance
The initial conceptualisation of OpenAI was to create a non-profit artificial intelligence set-up that would design open-source software and attempt to counter Google’s growing dominance in the field. Moving forward, we can expect scrutiny not only of new releases from OpenAI but also of the pace at which it releases AI technology. Investors and governments will be looking into the governance mechanisms these firms have in place, given their rapid growth in the last few years alone. According to the Financial Times, analysts have suggested that OpenAI will be hit by the week’s events, with rival groups Google and Amazon representing strong and stable challengers in the race to offer generative artificial intelligence services to businesses and consumers. It will be up to Altman to navigate the complex governance arrangement in which a not-for-profit board oversees a for-profit company, and to clear the path for a simpler corporate structure moving forward.
What About The Rest of Us?
Who controls and governs AI will arguably have the biggest impact on the public. Consider, for instance, the use of AI-driven autonomous weapons in the ongoing wars between Russia and Ukraine, or Israel and Palestine. Autonomous weapons are pre-programmed with a specific “target profile”; the AI then searches for that profile using sensor data such as facial recognition. When the weapon encounters someone (or something) the algorithm perceives to match its target profile, it fires and kills. This has the potential to become a global security threat, as the decision is not monitored by human judgement and it eliminates the accountability of those who wield the autonomous weapons.
Outside of war, we are only starting to see regulation of deepfakes and the misinformation and malicious content that comes with them. Financial penalties are still a far cry from preventing their potential to manipulate public opinion, alongside more nuanced harms like financial fraud and revenge porn. There is, as yet, no federal criminal law or legislation against deepfakes in the United States.
Perhaps the select few in control should return to the board’s stated purpose of creating AI that “benefits all of humanity” rather than pursuing profit and the battle for AI dominance.