This week, the Georgia Senate Study Committee on Artificial Intelligence (SR 476) convened to discuss the future of AI in Georgia. As AI technology becomes more advanced, and more industries adopt its uses, state legislators are seeking to advance policy that promotes growth and safety. The Committee’s discussion focused on how AI can help to propel Georgia’s continued prosperity as a business-friendly state, while also recognizing its real and potential risks.
Recent technological advances have naturally led to heightened political awareness of AI. Last year, 25 states introduced AI legislation, with 18 adopting resolutions or enacting laws. AI is becoming just as pervasive in policy discussions as it is in our everyday lives.
That said, it is worth exploring more precisely how we might use – and how we already use – artificial intelligence. It might surprise some who have followed AI’s recent developments and applications that this kind of technology has been with us, even to the point of ubiquity, for years now.
It is important to first differentiate two kinds of AI: generative artificial intelligence and artificial general intelligence (AGI). Familiar tools ranging from ChatGPT to the predictive text feature on iOS devices rely on generative AI.
Generative AI surely mimics human intelligence in some ways, but its functions depend more on math and speed than, say, thoughtfulness or any sort of critical thinking. Bernard Marr, a contributor to Forbes, wrote last month that generative AI is more akin to “a highly skilled parrot” than the human mind. It does not understand the content it produces. It instead uses available datasets to predict words and patterns likely to come next.
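The “highly skilled parrot” point can be made concrete with a toy illustration. The sketch below is not how modern systems are actually built (they use neural networks trained on vast datasets, not simple word counts), but it shows the underlying idea in miniature: given a word, predict the word that most often follows it in the data, with no understanding of meaning involved. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for the large datasets the column describes.
corpus = (
    "the committee met on monday . the committee discussed ai policy . "
    "the senate discussed ai legislation ."
).split()

# For each word, count which words tend to follow it (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word. No comprehension,
    just counting: this is the 'math and speed' at the heart of the idea."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))        # "committee" follows "the" most often here
print(predict_next("discussed"))  # "ai"
```

Scaled up enormously, with patterns learned over many words of context rather than one, this prediction-without-understanding is the family of techniques behind generative AI.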
Recently, those datasets have grown to reflect human language and experience more accurately, and the speed at which AI can complete its predictive analyses has increased. Artificial general intelligence is a step above this, to say the least.
Marr writes, “The concept of AGI is to mimic human cognitive abilities comprehensively, enabling machines to learn and execute a vast array of tasks, from driving cars to making medical diagnoses. Unlike anything in current technology, AGI would not only replicate human actions but also grasp the intricacies and contexts of those actions.”
It should be noted that AGI is still, for the most part, theoretical. While visions of Skynet wiping out the human race are unfounded, there is plenty of caution in the policy world about how to handle AI’s proliferation. Everything from concerns about plagiarism to fantastical science-fiction colors the discussion on the topic. Right now, most legislatures, including at the federal level, are still in exploratory stages on the topic.
However, there have been a few examples of more proactive measures. One of the most-discussed pieces of legislation from this year’s session was HB 986, a bill that would have outlawed the use of generative AI to interfere with elections and defraud Georgia voters. This bill stalled in the Senate after passing the House by a wide margin. The Georgia House also introduced HB 887, which, although it never received a vote, would have restricted the use of AI in certain healthcare and insurance coverage decisions. One bill that did pass was HB 203, enacted last year, which allows healthcare providers to use an AI system to conduct eye assessments and prescribe contact lenses and glasses.
These bills demonstrate how AI can intersect with election integrity and healthcare, but its potential to cross paths with other policy areas is nearly unlimited. Education, agriculture, finance and just about any field that values innovation has seen AI technology implemented or at least theoretically proposed in some form.
Ultimately, the policy discussion around AI comes down to a spirit of optimism versus one of caution. Guidelines that promote ethics and transparency are a large part of legislating AI, but lawmakers should err on the side of tech optimism. For example, many are rightly concerned about how the advent of AI will impact the workforce, namely, that workers will lose their jobs to a cheaper and more efficient AI. This is a reasonable concern, and it is likely to come to fruition, at least gradually.
However, the answer to offsetting job automation is not to jettison the principles of growth. Planning our future with AI requires that we adapt as we have for centuries: encouraging upskilling and AI training, and treating AI as a tool that augments work rather than replaces it. Human wants and needs dictate that there will always be work to do, as there always has been.
Finally, it is important to re-emphasize that AI as we currently know it is not so much “intelligence” in any form relevant to humans; rather, it has more to do with math and speed. These are not concepts we necessarily need to regulate. Regulation should exist as a policy tool in a narrow and specific form. We have far more to gain from AI than we have to lose, and policymakers should embrace the opportunity.