Why TensorFlow for Python is dying a slow death
Religious wars have been a cornerstone in technology. Whether it’s debating the pros and cons of different operating systems, cloud providers, or deep learning frameworks – a few beers in, the facts slide aside and people start fighting for their technology like it’s the holy grail.
Just think of the endless talk about IDEs. Some people prefer Visual Studio, others use IntelliJ, still others use plain old editors like Vim. There is a never-ending discussion – half ironically, of course – about what your favorite editor might say about your personality.
Similar wars seem to flare up around PyTorch and TensorFlow. Both camps have masses of supporters. And both camps have good arguments to suggest why their preferred deep learning framework may be the best.
That said, the data speaks a pretty simple truth. TensorFlow is the most widespread deep learning framework as of now. It gets almost twice as many questions on StackOverflow every month as PyTorch does.
On the other hand, TensorFlow has not been growing since about 2018. PyTorch has been steadily gaining traction until the day this post was published.
For the sake of completeness, I have also included Keras in the figure below. It was released around the same time as TensorFlow. But as you can see, it has tanked in recent years. The short explanation for this is that Keras is a bit simplistic and too slow for the requirements most deep learning practitioners have.

StackOverflow traffic for TensorFlow may not be collapsing anytime soon, but it is declining nonetheless. And there are reasons to believe that this decline will intensify in the coming years, especially in the world of Python.
PyTorch feels more pythonic
Developed by Google, TensorFlow was perhaps one of the first frameworks to hit the deep learning party in late 2015. However, the first version was rather cumbersome to use – as many first versions of software tend to be.
That’s why Meta started developing PyTorch as a means to offer almost the same functionality as TensorFlow, but to make it more user-friendly.
The folks behind TensorFlow quickly noticed this and adopted many of PyTorch’s most popular features in TensorFlow 2.0.
A good rule of thumb is that you can do everything PyTorch does in TensorFlow. It just takes you twice as much effort to write the code. It’s not that intuitive and feels pretty unpythonic even today.
PyTorch, on the other hand, feels very natural to use if you enjoy using Python.
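To make that contrast concrete, here is a minimal sketch of what a small model looks like in PyTorch – plain Python classes and ordinary function calls. The layer sizes and input shape are purely illustrative, not taken from the article.

```python
import torch
import torch.nn as nn

# A tiny illustrative classifier: an nn.Module is just a Python class,
# and running the model is just calling it like a function.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(8, 16),
            nn.ReLU(),
            nn.Linear(16, 2),
        )

    def forward(self, x):
        return self.net(x)

model = TinyNet()
x = torch.randn(4, 8)   # a batch of 4 fake samples with 8 features each
logits = model(x)       # shape: (4, 2) - one score pair per sample
```

There is no session, graph, or compilation step in sight: you define a class, instantiate it, and call it, which is a large part of why PyTorch code reads like ordinary Python.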
PyTorch has more models available
Many companies and academic institutions don’t have the massive computing power needed to build large models. However, size is king when it comes to machine learning; the larger the model, the more impressive the performance.
On Hugging Face, engineers can take large, trained, and tuned models and incorporate them into their pipelines with just a few lines of code. No fewer than 85% of these models can only be used with PyTorch. Only about 8% of Hugging Face models are exclusive to TensorFlow. The rest are available for both frameworks.
This means that if you plan on using large models, you’re better off staying away from TensorFlow or investing heavily in compute resources to train your own model.
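Those "few lines of code" look roughly like this sketch using the `transformers` library's `pipeline` helper. The model name is one of Hugging Face's standard sentiment checkpoints, and the call downloads the weights on first use, so it needs network access.

```python
# Hedged sketch: pull a pretrained sentiment model from the Hugging Face Hub
# and run inference. Downloads weights on first use (requires network access).
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
result = classifier("PyTorch makes this surprisingly easy.")
# result is a list like [{'label': 'POSITIVE', 'score': ...}]
```

No training, no compute cluster – which is exactly why the PyTorch-heavy skew of the Hub matters so much in practice.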
PyTorch is better for students and research
PyTorch has a reputation for being more appreciated by academia. This is not unjustified; three out of four research papers use PyTorch. Even among the researchers who started out using TensorFlow – remember, it arrived earlier to the deep learning party – the majority have now migrated to PyTorch.
These trends are staggering and persist despite the fact that Google has quite a large footprint in AI research and primarily uses TensorFlow.
What is perhaps even more striking about this is that research influences education and therefore determines what students can learn. A professor who has published most of their papers using PyTorch will be more inclined to use it in lectures. Not only do they feel more comfortable teaching and answering questions about PyTorch; they may also have stronger beliefs about its success.
Therefore, students may gain much more insight into PyTorch than into TensorFlow. And given that today’s students are tomorrow’s workers, you can probably guess where this trend is headed…
The PyTorch ecosystem has grown faster
Ultimately, software frameworks only matter insofar as they are players in an ecosystem. Both PyTorch and TensorFlow have quite developed ecosystems, including repositories for trained models beyond Hugging Face, data management systems, error-prevention mechanisms, and more.
It’s worth mentioning that TensorFlow’s ecosystem is currently slightly more developed than PyTorch’s. Keep in mind, though, that PyTorch joined the party later and has seen quite a bit of user growth over the past few years. Therefore, one can expect the PyTorch ecosystem to outgrow TensorFlow’s in due course.
TensorFlow has the better deployment infrastructure
As cumbersome as TensorFlow is to code, once written it’s a lot easier to deploy than PyTorch. Tools like TensorFlow Serving and TensorFlow Lite make deployment to the cloud, servers, mobile devices, and IoT devices a breeze.
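As an example of that deployment path, here is a minimal sketch of converting a Keras model to the TensorFlow Lite format for on-device inference. The model itself is a throwaway placeholder; the layer sizes are illustrative only.

```python
# Minimal sketch of TensorFlow's on-device deployment story:
# build a tiny Keras model, then convert it to a TF Lite FlatBuffer.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),   # illustrative placeholder layer
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()  # serialized model, ready to ship to a device
```

The resulting bytes can be written to a `.tflite` file and loaded by the TF Lite runtime on a phone or microcontroller – a pipeline PyTorch has only more recently started to match.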
PyTorch, on the other hand, has been notoriously slow with releasing deployment tools. That said, it has been closing the gap with TensorFlow quite quickly lately.
It’s hard to predict right now, but it’s entirely possible that PyTorch will match or even outgrow TensorFlow’s deployment infrastructure in the coming years.
TensorFlow code is likely to stick around for a while because it is costly to switch frameworks after deployment. However, it is quite conceivable that newer deep learning applications will increasingly be written and deployed with PyTorch.
TensorFlow is not all about Python
TensorFlow is not dead. It’s just not as popular as it once was.
The main reason for this is that many people who use Python for machine learning are switching to PyTorch.
But Python is not the only language available for machine learning. It’s the OG of machine learning, which is the only reason TensorFlow’s developers focused their support around Python.
These days, one can use TensorFlow with JavaScript, Java and C++. The community is also beginning to develop support for other languages, such as Julia, Rust, Scala, and Haskell, among others.
PyTorch, on the other hand, is very Python-centered – that’s why it feels so Pythonic, after all. There is a C++ API, but there isn’t half the support for other languages that TensorFlow offers.
It is quite conceivable that PyTorch will overtake TensorFlow within Python. On the other hand, TensorFlow, with its impressive ecosystem, deployment features, and support for other languages, will continue to be a major player in deep learning.
Whether you choose TensorFlow or PyTorch for your next project mainly depends on how much you like Python.
This article was written by Ari Joury and was originally published on Medium. You can read it here.