
What Trump’s AI Executive Order Means for the Industry

Guest Writer

Last Updated: December 2, 2024

This past Monday, President Trump signed an executive order titled The American Artificial Intelligence Initiative (AAII). It outlines five initiatives to help the U.S. promote growth and maintain leadership in AI. Each is listed below, along with the quote I found to most concisely summarize that section.

  1. Investing in AI Research and Development (R&D):
    “Directing Federal agencies to prioritize AI investments in their R&D missions.”
  2. Unleashing AI Resources:
    “The initiative directs agencies to make Federal data, models and computing resources more available.”
  3. Setting AI Governance Standards:
    “This initiative also calls for the National Institute of Standards and Technology (NIST) to lead the development of appropriate technical standards for reliable, robust, trustworthy, secure, portable, and interoperable AI systems.”
  4. Building the AI Workforce:
    “Prioritize fellowship and training programs to help American workers gain AI-relevant skills.”
  5. International Engagement and Protecting our AI Advantage:
    “Committed to promoting an international environment that supports AI R&D.”

I’d first like to discuss which sections stood out to me and then dive into more of a general discussion on why this executive order was even created.

Areas That Stood Out to Me

Initiative No. 2: Unleashing AI Resources

I like how this section is worded because, to build an AI solution, you have to feed it resources. Those resources come in the form of data, compute time, and existing models, and each is called out in the section.
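
As a concrete illustration of “existing models” being a resource, here’s a minimal sketch of pulling down a pretrained image classifier instead of training one from scratch. It uses TensorFlow’s Keras API purely as an example; nothing in the order prescribes a particular library or model.

```python
import tensorflow as tf

# Download a network someone else already spent the compute to train
# (ImageNet weights ship with the package -- no training required).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# The pretrained model classifies images out of the box, or can serve as the
# starting point (transfer learning) for a new task with far less data.
model.summary()
```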

First, in terms of data and models, we’ve already seen some government agencies open their vaults to the public. I still remember two years ago when NASA announced all their funded research would be open to the public, giving anyone access to their data, analysis, and findings. Of course, I didn’t have to scroll far to find someone in the comments stating the headline should have read, “All publicly funded research is open.” While that may still be a ways off, today you can go to GitHub and find Jupyter Notebooks (think of them as whitepapers plus code) from NASA as well as national labs and other government agencies. Local governments have also become great sources of data. Cities like New York and Chicago make it incredibly easy to access data on anything from potholes to fires to rodent sightings. If the federal government could promote more of this kind of openness, we would see some interesting developments. You could imagine municipalities hosting Kaggle competitions where people compete to build the most optimized traffic light system for their city.
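
To make the open-data point concrete, here’s a minimal sketch of pulling city records straight from NYC Open Data’s JSON API. The dataset ID and field names below are assumptions for illustration; check data.cityofnewyork.us for the dataset you actually want.

```python
import requests

# NYC Open Data exposes most datasets through a simple JSON API (Socrata SODA).
# The dataset ID and field names below are assumed for illustration -- look up
# the real ones on data.cityofnewyork.us before relying on them.
URL = "https://data.cityofnewyork.us/resource/erm2-nwe9.json"  # 311 service requests (assumed ID)
params = {"complaint_type": "Rodent", "$limit": 50}

rows = requests.get(URL, params=params, timeout=30).json()
for row in rows[:5]:
    print(row.get("created_date"), row.get("descriptor"), row.get("incident_zip"))
```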

Compute time is an interesting one. While we won’t see the federal government getting into the business of cloud computing (unless that rivalry between Trump and Bezos really heats up), the supercomputers it does own will now prioritize AI projects. It’s also possible that we could see federal grants and discounts for purchasing compute time. Many companies (Google, Amazon and Microsoft among them) already offer some amount of compute time free of charge to students and first-time users. There’s also the argument that compute time isn’t much of a barrier for AI projects: most AI work can easily be done on a budget laptop, though there are cases in which you need more power.
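
As a rough sanity check on the “budget laptop” claim, here’s a minimal sketch that checks for a GPU and then trains a small classifier on MNIST entirely on CPU; it finishes in a few minutes on ordinary hardware. The model and hyperparameters are arbitrary choices for illustration.

```python
import tensorflow as tf

# Check whether a GPU is available; if not, everything below still runs on a laptop CPU.
print("GPUs found:", tf.config.list_physical_devices("GPU"))

# A small image classifier on MNIST -- the kind of model that trains in minutes
# without any special hardware or paid compute time.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))
```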

Initiative No. 3: Setting AI Governance Standards

This initiative calls for the National Institute of Standards and Technology (NIST) to set standards in a number of areas. Just like the name suggests, setting standards is what NIST does. However, what it doesn't do is regulate or enforce those standards, which gives them roughly the enforcement power of a recommendation. Still, I like the idea. It makes sense that public models should have to meet standards that are known by the public before the models are deployed in public. Bridges have to meet standards before cars are allowed to drive on them; isn't it only fair that an automated toll booth should have to pass set standards before cars are allowed to drive through it?

NIST is being asked to develop standards for an interesting set of characteristics: reliable, robust, trustworthy, secure, portable and interoperable. Having "trustworthy" in the list not only makes it sound like the Scout Law, it also turns this into something of a philosophical discussion. What is trustworthy? Is that the same as ethical? Does the definition of trustworthy change over time, or differ from place to place? This quickly becomes a discussion about the ethical application of AI, which needs to happen, but I’m not going to get into it here.

Why Are We Getting This Executive Order?

A bigger question is, what prompted this executive order? The answer can be summed up in a simple equation: AI + China = Fear. In 1957, Sputnik created a storm of fear over the possibility that the U.S.S.R. would dominate the new arena of space. At the time, we couldn’t even guess how we would go on to use space technology. Over half a century later, that technology enables us to do everything from getting driving directions to having the right time on our phones. Fast forward to today: AI is the arena and China is the superpower that’s hoping to capitalize on it. While AI is still in its infancy, we can’t predict how this new technology will be applied 50 years from now. We do understand it's a race, and with that comes the fear of losing.

Do We Need to Worry About Falling Behind in AI?

My answer is no and yes. Let me explain by answering the following questions.

Q: Will we fall behind in developing the underlying AI algorithms that let us build powerful models? 

A: No, because no one is behind in this area. AI has been incredibly open. Anyone with an internet connection can access the latest AI tools and start creating their own AI solutions. In 2015, Google announced TensorFlow, one of the most powerful deep-learning packages available. Not only did they make it free to download, the code itself was open source and hosted on GitHub, so anyone could take a peek under the hood and see exactly how it runs. This openness has greatly benefited TensorFlow. In GitHub’s latest year in review, they listed TensorFlow as one of the most significant code bases in terms of the number of contributors, many of whom aren't Google employees.
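
To put the “anyone with an internet connection” claim in concrete terms, the sketch below is roughly the entire barrier to entry: a free install and a few lines of Python give you the same automatic differentiation machinery that powers deep-learning training. The toy gradient is my own illustration, not something taken from the order or the article’s sources.

```python
# pip install tensorflow   (free, no license key -- the same library Google open-sourced in 2015)
import tensorflow as tf

# Automatic differentiation -- the machinery behind training deep networks --
# is available to anyone, not just large labs.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x

print(tape.gradient(y, x).numpy())  # dy/dx = 2x + 2 = 8.0 at x = 3
```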

The growth and acceptance of open source communities has made it such that the best tools are available to anyone. Go back ten years, and that wasn’t the case. You had companies like SAS and Stata whose pricing meant only large corporations and research institutes could afford the technology. Today, those companies are losing significant market share to open source alternatives, and there’s no sign of things going back. So, I don’t think we’ll lag behind in terms of the underlying AI technology.
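
For a sense of what “the best tools are available to anyone” looks like in practice, here’s a minimal sketch of an ordinary regression analysis done with free, open-source packages. The data is synthetic and stands in for the kind of work that once required a SAS or Stata license.

```python
# pip install pandas statsmodels   (free alternatives to commercial stats suites)
import numpy as np
import pandas as pd
import statsmodels.api as sm

# A toy dataset standing in for the kind of regression work that used to
# require a commercial license.
rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=200)})
df["y"] = 2.5 * df["x"] + rng.normal(scale=0.5, size=200)

X = sm.add_constant(df["x"])          # add an intercept term
results = sm.OLS(df["y"], X).fit()    # ordinary least squares
print(results.summary())
```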

Q: Can we fall behind in developing creative solutions that leverage AI? 

A: Yes. If we don’t encourage using this tech in creative ways, then we will miss out on being the ones to own that IP. For the government, that means missing out on the creation of new jobs and tax revenue. Fortunately, we’re doing pretty well right now. Anecdotally, you can’t throw a stone without hitting a company that says it’s using AI (how much of that is real versus hype is up for debate). Looking at the actual numbers, the U.S. has the most AI talent and the most AI companies (about 14 percent of the world's talent, compared to 9 percent for China). However, this will most likely change because China is out-funding the U.S. Last year, China made up 48 percent of global AI funding, whereas the U.S. accounted for only 38 percent.

This isn’t happening by chance. In the last four years, China has made several concrete plans to push its economy forward. In 2015, it announced the Made in China 2025 plan, a decade-long push to make its economy more technology-focused. In 2017, it doubled down on AI with the Next Generation Artificial Intelligence Plan, a dense 29-page document of stages, milestones and goals aimed at making China the world leader in AI by 2030.

Going back to Trump's executive order, we see some pretty vague language and no set amount of funding declared for AI research. However, it does call for further plans to be presented within 180 days. Maybe this is par for the course for an executive order; I don’t know, since my degree is in data science, not political science. Either way, it feels like China has a business plan it’s already executing on, whereas the U.S. has only put together a mission statement.

In 1961, Kennedy announced before Congress that we would go to the moon. That was four years after Sputnik. In May, it will be four years since China announced Made in China 2025. I know comparing a plan to an orbiting satellite isn’t apples to apples, and the nebulous nature of AI makes it harder to set goals. With the space program, we had a clear objective that anyone could literally see in the night sky; AI is more amorphous. Part of the challenge for governments is going to be creating a plan and goals that the public can both understand and support spending on.

Written by Matt Yancey, Principal Data Scientist and Machine Learning Engineer at ClearObject.
