AI is the New Rocket Science

Photo by SpaceX on Unsplash.

“It’s not like it’s rocket science!” exclaims a character in your favorite sitcom/drama/book, who then explains to another character how simple the thing they are asking for is. “Rocket science” has been widely used in everyday English to mean something incomprehensibly complex, more often than not with a sarcastic, negative connotation, much like how “heavenly script” is used in Chinese (and, by transitivity, one of the most incomprehensible languages in the world).

On the surface (pun intended), the appeal of rockets and space technology is easy for the general public to grasp, and it has long been sensationalized and romanticized by artists in every shape and form imaginable, from movies (Oscar-winning ones too) and songs (including this oldie-goodie) to endless books, to name just a few categories. Even without extensive educational efforts from pop science and space agencies, it is not hard to see the appeal of space and space technology. After all, every launch marks the liftoff of a humongous space vehicle atop a massive ball of fire, into the eternal void full of the undiscovered wonders of our physical universe. The nerve-racking challenges in weightlessness, the cool and memorable jargon (“extravehicular activity” sounds unfamiliarly cool with an easy acronym, while “natural language processing” sounds like a meat plant that turns books into dictionaries1), the smart-looking lead engineers and scientists in the huge mission control rooms, and the (oddly never soundless but) expansive black backdrop of the universe all contribute to the mystery and gravitas. Add black holes and general relativity (and the Nolan brothers), and boom, instant Oscars.

On the other hand, the portrayal of AI (really mostly just robots) has largely fallen into two tropes: the bad AI deceiving humans and leading to our extinction, and the good, friendly, quirky, and often under-equipped AIs discovering their humanness and/or fighting these bad AIs alongside us. Sometimes a bit of both. (AI) programmers on the silver screen are either staring at walls of JavaScript scrolling by as they smash their laptop keyboards randomly (which, can someone who knows someone in Hollywood talk them into at least replacing with Python?), or are token nerds who get killed off after offering the one critical smart comment to the muscular super-spy protagonist (apparently I’m not the only one wondering).

Despite their vastly different public images, rocket science and AI science actually have a lot in common. Some commonalities are more obvious, so let us get those out of the way first.

First, both are extremely capital-intensive and knowledge-intensive to build. While the proliferation of AI today might give a different impression, the AI market largely remains an oligopoly, where a few well-funded top players dominate most of the leading-edge work. Even the Davids in the news are Goliaths compared to most of the rest of the world in terms of access to computational resources and/or top research talent, especially if they are building much of the breakthrough from scratch.

Second, both are applicable in a myriad of dual-use situations, which invites heavy government scrutiny if not also investment, media hype in the name of national security and pride, as well as (sometimes misguided) public interest. AI science more and more resembles its rocket relative in its heyday 70 years ago, with DeepSeek actively compared to a “Sputnik moment” by the media, no less.2

Third, both are overly romanticized by the general public. This manifests mainly in two ways – one where works of fiction make interesting extrapolations of the technology into the future while grossly missing the mark on things that seem obvious to us decades later (e.g., the phone prediction below from 1956); another where we tend to amplify the voices of a few people associated with the technical breakthroughs, who are usually not the most versed in the technologies they represent. One example is early astronauts. While there is no doubt about their bravery in sitting atop their tin cans (which is what they were by today’s standards), they were highly trained to follow a very detailed mission protocol designed and maintained by numerous engineers to account for all foreseeable issues during the mission, with a booster, a vehicle, electronics, etc. designed and built by hundreds if not thousands of highly capable and deeply knowledgeable scientists and engineers. Yet we end up mostly remembering the “One small step” quote — and pretty much the same thing is happening today.

Prediction of future phones in 1956. From Reddit. Original source unknown.

In the meantime, there are also some less-discussed commonalities between rocket science and today’s AI science, and you might not expect some of these if you haven’t been paying close attention.

First, “science” is really a misnomer for both. To be clear, this is not to say that no science is happening within these disciplines: on the contrary, when there is abundant public interest, funding, and potential application to practical problems, scientific research and discoveries proliferate. But the term “science” clouds so much of the picture and creates the illusion that everything happens in places where people wear white lab coats, stare intensely at formulae and hand-drawn orbit diagrams on blackboards, or debate the latest scientific findings in presentations in a crowded classroom.

So much of what is happening in both rocket science and AI science is not science per se, in at least two ways. First, a lot of what we marvel at as scientific achievements are really a combination of scientific and technological breakthroughs, and by the time they become a public sensation through an astonishing demonstration or tremendous economic value, what is known and well understood from a scientific standpoint is often very primitive. Less often is it the case that a scientific theory published decades prior perfectly explains all the new empirical data we collect in experiments; more often, we engineer things to work first with a ton of new empirical data, then patch up the theory here and there to explain what happened. It is worth noting that even in top academic labs, the line between very cutting-edge scientific advances and very sophisticated engineering is often blurred at best, since these disciplines can deliver so much practical value before everything about them is perfectly well understood, as long as the good recipes yield reproducible results and can be built upon.

Second, using just the term “science” underplays the amount of stellar systems engineering that goes into both of these disciplines. A great deal of program management, business administration, and architectural and engineering marvels go into these advances, which would have been impossible were they left to just a handful of smart scientists alone. This is due to the complex and multifaceted nature of these problems, which span so many academic disciplines and application domains that it is impossible for any single human to know everything well enough to make meaningful contributions to all of them – and even if someone did, they would simply not have enough time and energy to do all the work that is required. Organizing a team of individuals, each talented in their own way, to move in unison towards a grander goal while keeping the quality of the entire delivered system up to a certain standard is a lot more difficult than it might sound – to get a glimpse of this challenge, one needs to look no further than group projects at their nearest school or university.

Second, we tend to take too narrow a view of their potential (positive) implications. Both rocket science and AI science are complex enough to require the collaboration of people from many different disciplines, and this sometimes pays off in ways that are difficult to imagine while the technology is being developed to solve the problem at hand, even for the very experts who invented it. What do a LASIK surgeon, a newborn baby drinking formula milk, and an architect designing a new skyscraper have in common? They are likely using some technology that originated from NASA’s space exploration. Numerous such examples can also be found in the life sciences, materials science, and healthcare, to name just a few. While not all of these are directly related to the problems the technology was originally designed to solve, this is a great illustration of the far- and wide-reaching positive technological implications of solving complex engineering problems.

In AI, this effect is likely going to be more pronounced. While most of the technological advances that enabled human space flight are helping to improve the human condition wherever they can be leveraged on Earth, space flight itself is not a “product” that directly drives a positive impact on most people’s everyday lives (except for the convenience that satellites provide, of course, which should not be underplayed and deserves a whole other article — here’s an example). This picture is vastly different for the development of AI. More so than ever before, the general public has direct access to some of the cutting-edge AI artifacts, and has woven them deeply into the fabric of their everyday work and life. At the same time, not only does AI require technological advances to build and use efficiently (this is what seemingly disjoint technologies like crowd-sourcing, light-based chips, and massive yet efficient storage clusters have in common), but strong AI artifacts can also be leveraged to mimic human experts and potentially uplift productivity and innovation in many other disciplines (AI for science and AI for AI), catalyzing more emergent technological leaps — whereas the V-2 rockets you build will not automatically help you build a Saturn V.

Third, both are sometimes overly mystified by (some of) their practitioners. With great power sometimes comes not great responsibility, but great insecurity. Both rocket science and AI science can bring about significant societal changes, economic opportunities, and power shifts with their dual-use nature. With vested (geo-)political, economic, financial, and sometimes reputational or socioeconomic-status interests in maintaining the scarcity of the technology developed, the nations, organizations, or individual practitioners working on rocket science and AI science are sometimes incentivized to unnecessarily complicate these technologies in the public narrative, and to put up a mystified and grandiose facade. With extensive gilding and gatekeeping, they try to ensure that they are among the few who are “in the know”, and shun anyone who comes near, offering misdirection when they get close.

While from a short-term perspective this might be beneficial for the individuals and groups involved, in the long term this is usually a net loss for society and humanity as a whole. Science and technology did not win over the world by shipping around mystic elites speaking a language that they and only they can understand, but by developing a rigorous methodology that derives theories from a relentless belief in empiricism, where embracing a broader range of ideas that match all of the experimental findings as we know them has led us to rediscover the empirical truths about our world many times over in history.

So, as we recognize these commonalities between AI and rocket science, I believe there are also a number of lessons we can inherit from our space-exploring predecessors back in the day:

Strong leadership and clear vision are key. Because of the capital-intensive and high-risk nature of these endeavors, it is easy to shy away from the challenge, to put in a half-hearted effort, or to divide the exploration into too many disjointed directions instead of focusing on solving the key challenges — all quick recipes for wasting a lot of resources and for eventual failure.

It is a Herculean organizational challenge to unite a large team of people around something as complex and as challenging as rockets and modern AI. This is partly because, at scale, people will have to come from different disciplines with different backgrounds. They will not only have very different views on what is obvious, what is comprehensible, what is important in the entire project, and what is at risk of failure, but will also have very different views from whoever is leading the team, because every person is limited to their own domain(s) of expertise. Meanwhile, although most share the same overarching mission of driving great achievements through the success of the overall project, each individual also comes with their own personal agenda and goals. This is where clear, vision-driven direction is key for the group to operate more autonomously, yet still move towards the same collective goal in unison. Many individuals with strong technical backgrounds have at some point wondered why someone not as technically brilliant or versed as them is doing this thing called “management” while they are doing the “real work”, and whether they themselves could instantly do a much better job with their deeper expertise in the subject matter — the truth is, these are usually two very different job categories with very different sets of skills, and it takes serious time and effort to consistently do well in just one of them, let alone both. Working in a team of 5 is usually not that difficult, but imagine 5x that, then 5x that, and 5x again. What you would end up using is a very different set of skills from those you started out with.

Set moonshot goals. “We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard.” This famous quote from U.S. President John F. Kennedy’s (JFK’s) 1962 speech actually emphasizes two key ingredients that I believe make moonshot goals work: a challenging goal that goes a few steps beyond the average person’s ambitions, and a somewhat tangible time frame allotted to achieve it.

The goal-setting part is more intuitive and more commonly talked about. A challenging yet meaningful goal can help attract the right kind of talent and keep them motivated while working on it, which typically leads to more exciting advances than what is imaginable at the onset. Like many things in science and engineering, solving problems at a larger scale, making things work at faster speeds, and increasing system reliability in more challenging conditions all require deep investigation and sometimes multiple breakthroughs. Once we know how to do these things for the first time, we have not only obtained good solutions for smaller scales, slower speeds, and lower reliability requirements along the way, but have also built a world-class team for future challenges of this sort.

Often less talked about is the second part, the time frame. JFK did not say “We choose to go to the Moon in this year and do the other things”, probably because it was mid-September already (with the holiday season coming up, there was practically no time left in the year), nor did he say “We choose to go to Mars in this decade and do the other things”, presumably because it was too Red for the American people’s taste at the time. Jokes aside, these would have been time frames incompatible with the technological leaps needed at the time, which I would sometimes call “physically impossible” as they would almost break the laws of physics and causality. The risk of setting such goals is that people either would not trust leaders who set them, or would quickly lose trust once they realize the timeline is entirely unrealistic after putting in a good amount of hard work and getting nowhere close.

Respect subject matter expertise with humility. Another effect of working on a complex problem or system is that no one will consistently have a perfect understanding both at the bird’s-eye level and down in the details that can sometimes make or break the entire system. As the problems we solve become more and more complicated, it is through simplification that we make abstractions, reduce the complexity of the problem, identify patterns, and make decisions that involve larger and larger amounts of human effort and resources. It is necessary to make assumptions and develop autonomous mechanisms for a large, complex organization to function, but it is also too easy for those in leadership and decision-making positions of these organizations to forget that the people actually working first-hand on the subject matter are more likely the experts on the problems.

For a large group of people to make progress on a sophisticated problem, it is important for those who lead to discern important signal from noise from the perspective of the project as a whole, which means not everyone’s voice will be heard at every given point in time. In the meantime, for the organization to stay healthy and the risk in the project to remain reasonably contained, it is also crucial that organizations have a good way of surfacing mission-critical concerns and considerations, which often come from front-line contributors, and be decisive enough to act on them, no matter how trivial they might seem to an outsider. On this point, I am constantly reminded of the O-ring issue that eventually led to the Space Shuttle Challenger disaster. In a project as intricate as the space shuttle program, layers of management are inevitable, and the people assigned to management roles are usually either more senior or more removed from the day-to-day work, if not both. Their failure to trust the expertise and professional judgement of the engineers who actually worked on the O-rings led to this information being buried in the giant organization, which eventually became the weakest link that caused the huge investment to fail catastrophically, with the loss of precious human lives.

Collaborating with openness, we all win; competing with divisiveness, we all lose. In the same 1962 speech at Rice University, JFK retold an ancient piece of philosophical wisdom eloquently in modern English: “The greater our knowledge increases, the greater our ignorance unfolds.” This highlights the seemingly paradoxical nature of knowledge exploration — as we learn and expand the volume of the body of our knowledge, the surface area where this body meets the vast vacuum of ignorance inevitably grows with it. With every question answered, two more seem to pop up, pointing further into the darkness of the unknown.

In the meantime, as Sir Isaac Newton famously put it, “If I have seen further it is by standing on the shoulders of Giants.” This could not be more true in today’s rapidly developing world of artificial intelligence and information technology as a whole. Information technology has provided us with the infrastructure to replicate not only the factual knowledge that others have produced or accumulated, but now also the operational knowledge of how to do things. Instead of reinventing the wheel, in the digital world one can literally make “infinite” copies of that exact same wheel wherever one wants to use it, as long as the wheel’s licensing permits. We are constantly witnessing great breakthroughs at the scale of all of humanity, rather than just one or two groups of really brilliant people, whose work is in turn built on countless digital wheels built by generations of their predecessors. It is a largely open-source culture that makes this possible, and it has in turn often benefited the very people who decided to share their work publicly in the first place, because they are no longer obliged to have all the best ideas about how to improve their work — the community will do much of that work with them. From personal experience, five years and a few thousand GitHub stars later, Stanza has grown into a much more robust, functionally rich, and well-maintained project thanks to the numerous contributors not only to the project itself (especially kudos to John Bauer!), but also to Universal Dependencies, various multilingual NLP datasets, and the various tools that Stanza depends on — none of this would have happened were we in a closed-source alternate universe.

But, you might ask, this post has talked about so many parallels between rocket science and AI science — and while it is good to learn from history and everything — is there anything unique to AI science that does not have a good parallel?

One of the defining characteristics that sets it apart from most natural sciences, in my opinion, is that its goals and references are so elusive. In the physical sciences, the goal is to propose theories and formulations that model well what the universe does and make reasonable predictions about it; the object of study is permanent. That is, the universe does not move or suddenly change its rules of engagement as we discover more. Instead, the challenge has typically been either that we oversimplified in our modeling and found new evidence from new experiments that contradicts our theories and therefore helps refine them, or that the scale or complexity of practically meaningful simulations required too many resources, so we keep inventing better and better ways to approximate what the universe does. The universe does not care and does not play games.

Intelligence, on the other hand, has been a constant, elusive dream since humans started philosophizing about it. In each era, we set a seemingly impossible goal for the technology of the time as the “crown jewel” or “holy grail” of AI, only to quickly dismiss it after it has been achieved by saying “that was not what intelligence TRULY is anyway”. From marveling at machines that could merely remember and do simple logical reasoning, we got used to them doing massive amounts of computation at a rate our biological brains cannot hope to keep up with, then witnessed theorem provers solving complex symbolic problems, and saw computers perform interesting tasks in blocks worlds. From there, AI developments have fueled knowledge-based expert systems, driven DARPA’s cars, beaten humans at chess, Go, and StarCraft, engaged in very humanlike conversations on common topics, and performed human-like tasks in digital and physical environments, and now it seems like some breakthrough is happening somewhere in AI every week, if not every day.

Similar to how Yuval Harari described stock markets in his book “Sapiens”, it appears that intelligence is a second-order chaotic system, where mere observation and attempts to predict its behavior change the behavior of the system itself. Humans and intelligence are like Achilles and the Tortoise in Zeno’s Paradox — whenever we catch up with what we deemed the peak of intelligence, it has already moved on to a new point. While this is by no means trying to underplay the advances in AI we have witnessed in the decade leading up to today, I am cautiously optimistic that in another decade or two, we will look back at today’s AI systems and call them (and our current understanding of intelligence) primitive. There is still so much that our current AI systems struggle to do as well as humans, despite their ability to do a good-enough job much faster and more scalably (sounds familiar?).

Will humanity catch up with the intelligence Tortoise in this millennium, or ever? I do not know, because even the best predictive models rely on good reference data and a strong pattern to make accurate predictions, and as we have seen, AI science is anything but predictable itself. In the meantime, I do hope that humanity gets to witness the day artificial intelligence surpasses the entire human race at what we deem as intelligence, and with it, answers many more curious questions about our world than we have answers for today. Maybe at that time, instead of repeatedly realizing “but that’s not intelligent enough” in the pursuit of intelligence, we can pursue our true humanness, and ask, “Is this human enough? Am I human enough?”

Footnotes

  1. I am allowed to make this joke because I have worked at that meat plant in their QA department, among other positions. 

  2. DeepSeek: Did a little known Chinese startup cause a ‘Sputnik moment’ for AI? NPR News, 2025. 



