January 13, 2024

Should new generative AI applications be more regulated?

NetworkTigers discusses the federal chief information security officer’s request to halt new generative AI uses.

For some, generative AI tools such as ChatGPT, Google Bard, and DALL-E are the future, opening doors to promising new frontiers in cybersecurity, virtual assistance, threat detection, and important scientific breakthroughs. For others, these advances in artificial intelligence may bring about the end of everything.

Many of the companies spearheading the race forward in AI are based in the United States and subject to its regulatory guardrails. The current administration has started unveiling its larger cybersecurity plan, which is poised to begin in the fall of 2024. In the meantime, federal Chief Information Security Officer Chris DeRusha at the Office of Management and Budget has ordered a halt on more than 1,000 test cases, pilot programs, and new usage requests involving generative AI from different offices.

Understanding the pause on federal agency AI test cases

“We can’t just break the rules and have use without understanding the risk,” explained DeRusha in a recent speech at the FAIR Institute cybersecurity and risk managers conference. Notable tech leaders, including Elon Musk (a co-founder of OpenAI, which developed ChatGPT), Steve Wozniak, Andrew Yang, and Rachel Bronson, have signed an open letter in support of a temporary halt on AI development, citing “profound risks to society and humanity.” Their stance largely dovetails with the halt on further development of AI test cases used by the federal government. However, the letter also suggests that humanity could now enjoy an “‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”

Examples of affected agencies

Additionally, the following federal agencies have issued their own voluntary pauses on the download or use of AI software, citing cybersecurity and data handling concerns: 

  • The US Space Force
  • The Pentagon
  • The General Services Administration
  • The Environmental Protection Agency

While generative AI may have a bright future in helping detect software vulnerabilities and strengthen cyber threat responses, the heads of multiple agencies, including federal CISO Chris DeRusha, feel the technology is too nascent to be used safely and appropriately. Further test cases await direction from the Biden Administration, which issued its first executive order on AI at the end of October 2023.

What are the concerns around new generative AI? 

Concerns about AI primarily revolve around a series of unknowns, as the field remains mostly unregulated. Civil rights issues are a worry for many, as AI can learn to discriminate on the basis of race, age, and gender. Some researchers have already pointed to examples of over-policing and issues with recidivism prediction software, and Black artists have cited racial bias in images generated by the algorithms. Others raise red flags about copyright protections and intellectual property rights being ignored or trampled. Finally, researchers have shown that OpenAI’s ChatGPT sometimes experiences a phenomenon called “hallucination,” in which the system makes up facts and delivers them with the same confidence as its usual output. This may worsen the spread of disinformation and make it harder to ensure that machine learning is a safe and effective way to develop and test new products, theories, and protocols.

What are government AI guidelines?

Generative AI in the federal government is on hold for now. On October 30, 2023, the current administration unveiled the first executive order governing the use of AI, which includes the following components:

  1. New safety standards, including the sharing of generative AI safety test results with the federal government
  2. New cybersecurity standards, including the usage of AI to help discover flaws in software critical to the federal government
  3. Evaluation of privacy in the use of AI, including a Commerce Department program to test watermarking for AI-generated content
  4. Development of best practices for implementing AI, especially for landlords and the criminal justice system
  5. A report from the US Department of Labor on possible disruptions to the labor market due to AI
  6. International cooperation to develop and implement worldwide AI regulation
  7. Collaboration with the Department of Health and Human Services to identify harmful AI practices in healthcare
  8. Expanding grant programs for AI research undertaken in the field of climate change

This executive order is the first piece of governance on the use of AI technology, and how software companies and federal agencies will implement it remains to be seen. Most of these commitments cannot happen overnight: the White House outlines turnaround times of at least 90 days, and in some cases closer to a year, for these structures. It is also not yet clear whether the halt on generative AI use cases in the federal government will be lifted.

Looking forward to new generative AI

Most sources still agree that generative AI is essentially what we make of it. How humans choose to use, guide, and develop generative AI remains an open question, and leadership over where this new cyber frontier should take us is perhaps the defining question of our time. If individual companies cannot agree on where AI should take us, the US government is poised to offer substantive guidance on how and when this new tool can be developed to best serve our needs and meet our ethical codes.

About NetworkTigers


NetworkTigers is the leader in the secondary market for Grade A, seller-refurbished networking equipment. Founded in January 1996 as Andover Consulting Group, which built and re-architected data centers for Fortune 500 firms, NetworkTigers provides consulting and network equipment to global governmental agencies, Fortune 2000, and healthcare companies. www.networktigers.com

Gabrielle West
Gabrielle West is an experienced tech and travel writer currently based in New York City. Her work has appeared on Ladders, Ultrahuman, and more.
