With the Rise of PyTorch TensorFlow’s Deep Learning Dominance May Be Waning

The News: OpenAI recently announced a move to standardize its developers’ AI modeling work on PyTorch, an open source deep learning library. OpenAI, a prominent San Francisco-based non-profit AI research consortium cofounded in 2015 by Elon Musk, former Y Combinator president Sam Altman, and others, also announced that it is joining the PyTorch community as an active contributor. OpenAI is making these moves so that its developers can more rapidly and efficiently create, iterate on, and share GPU-optimized implementations of AI models. The announcements come after several years in which the consortium implemented projects in diverse AI modeling frameworks based on their relative strengths for various use cases. Nevertheless, the consortium retains the option of using frameworks other than PyTorch when there is a particular technical justification to do so. For the full announcement, read the OpenAI blog.

Analyst Take: PyTorch has become one of the most widely used open source deep learning libraries. In fact, I believe PyTorch is the leading contender to TensorFlow in the hearts and minds of AI developers everywhere. The popularity of PyTorch is grounded in its simplicity, ease of use, dynamic computational graph, and efficient memory usage, and with the rise of PyTorch, TensorFlow’s deep learning dominance may be waning.
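
As a concrete illustration of that dynamic computational graph: PyTorch builds the graph on the fly during each forward pass, so ordinary Python control flow can shape the computation. Below is a minimal sketch of this define-by-run style; the toy module and tensor shapes are my own, purely for illustration.

    import torch

    class TinyNet(torch.nn.Module):
        """Toy module whose forward pass branches on the data itself;
        legal because PyTorch rebuilds the graph on every call."""
        def __init__(self):
            super().__init__()
            self.linear = torch.nn.Linear(4, 4)

        def forward(self, x):
            # Ordinary Python control flow participates in the graph.
            steps = int(x.abs().sum()) % 3 + 1
            for _ in range(steps):
                x = torch.relu(self.linear(x))
            return x

    net = TinyNet()
    out = net(torch.randn(2, 4))
    out.sum().backward()  # autograd traces whatever path actually ran
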
PyTorch Dominance is Growing

Developed by Facebook AI Research Lab and released publicly as open source in October 2016, PyTorch is a machine learning library based on Torch, a scientific computing framework and scripting language built on Lua. Currently in version 1.4 and available under a modified BSD license, PyTorch also incorporates Caffe2, a deep learning toolset that was co-developed by Facebook’s lab and researchers at UC Berkeley.

In its 3+ years on the market, PyTorch has continued to gain adoption and build momentum among developers of AI, deep learning, and machine learning applications. With its latest announcements, OpenAI has given the PyTorch community a significant momentum boost in the battle to overtake TensorFlow as the preferred AI modeling framework for data scientists. OpenAI’s very public standardization on PyTorch takes place against a backdrop of signs that PyTorch is catching up to TensorFlow both in functionality and in adoption by working data scientists.

TensorFlow’s User Base is Strong, but its Early-mover Advantage is Diminishing

TensorFlow’s longstanding segment leadership is grounded in part in its early-mover status: it was open-sourced roughly a year before PyTorch. TensorFlow, now in version 2.0, incorporates a deeper stack of algorithms, capabilities, and tools than PyTorch, though each project has added features that had been associated with the other, leading to greater functional parity. But the overall advantages of each stack are quite nuanced, as is made clear in this detailed technical comparison.

TensorFlow also has considerably more users than PyTorch. This is shown in the Stack Overflow Developer Survey from 2019, in which 10.3 percent of respondents reported using TensorFlow while less than one-third as many, 3.3 percent, reported using PyTorch or its Torch predecessor.

PyTorch is Gaining on TensorFlow in Some Significant Ways

TensorFlow’s dominance aside, according to GitHub’s Octoverse report, PyTorch has been one of the fastest-growing open source projects over the past 12 months, though TensorFlow remains one of the largest open source projects.

According to Facebook, the number of contributors to the PyTorch platform grew more than 50 percent year-over-year to nearly 1,200.

In addition, recent market research shows how rapidly PyTorch is approaching TensorFlow in becoming an essential tool for a wide range of AI, deep learning, and machine learning challenges. Some key metrics in this regard are as follows:

  • Interest: In marketplace interest, PyTorch has risen to near parity with TensorFlow in Google searches between January 2017 and today.
  • Relevance: In job relevance, TensorFlow appeared in three times as many job listings on Indeed, Monster, SimplyHired, and LinkedIn as PyTorch did in April 2019. However, TensorFlow’s edge in job-listing mentions had dropped to 2x as of last month.
  • Adoption: In developer adoption, PyTorch was used in fewer published research projects than TensorFlow at the NeurIPS 2018 conference, which took place in December of that year. However, PyTorch was used in 166 papers at that conference’s 2019 event, while TensorFlow was used in only 74. In addition, The Gradient found that every major AI conference in 2019 had a majority of papers implemented in PyTorch, while O’Reilly noted that PyTorch citations in papers grew by more than 194 percent in the first half of 2019.

PyTorch Scale Advantages are Tipping the Scales, but Deployability is Still TensorFlow’s Strength

Scalability and efficiency are becoming strong suits for PyTorch in the competitive battle with TensorFlow. OpenAI’s late-January announcement that it is standardizing on PyTorch came two weeks prior to Microsoft Research’s announcement of its open-source DeepSpeed. This new deep learning library can run PyTorch models with changes to only a few lines of code and is available through a lightweight PyTorch API. It uses massive parallelism to train deep learning models with as many as 100 billion parameters on NVIDIA GPU clusters considerably faster than older approaches.
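
To make concrete what “changes to only a few lines of code” means, here is a minimal sketch of the training-loop pattern DeepSpeed documents. The model, data, and ds_config.json path are placeholders of mine, exact keyword arguments vary across DeepSpeed versions, and the script would be launched with the deepspeed launcher rather than plain python.

    import torch
    import torch.nn.functional as F
    import deepspeed

    # Stand-ins for a real model and dataset; shapes are illustrative.
    model = torch.nn.Linear(1024, 1024)
    data = [(torch.randn(8, 1024), torch.randn(8, 1024)) for _ in range(10)]

    # deepspeed.initialize wraps the model; batch size, fp16, and other
    # parallelism settings live in a JSON config (path is a placeholder).
    model_engine, optimizer, _, _ = deepspeed.initialize(
        model=model,
        model_parameters=model.parameters(),
        config="ds_config.json",
    )

    for batch, labels in data:
        batch = batch.to(model_engine.device)
        labels = labels.to(model_engine.device)
        loss = F.mse_loss(model_engine(batch), labels)
        model_engine.backward(loss)  # replaces loss.backward()
        model_engine.step()          # replaces optimizer.step()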

These recent announcements focus on PyTorch’s advantages in building highly scalable and efficient deep learning models for running on GPU clusters. This new performance-centric momentum for PyTorch comes amid growing grumbling among developers over weaknesses and limitations in TensorFlow. For example, this recent developer blog takes TensorFlow to task for being confusing, hard to use, bloated, slow, inefficient, and excessively complex. In addition, developers are balking at new TensorFlow abstractions that are hard to port existing code into.

Nevertheless, PyTorch has a tough challenge trying to dislodge TensorFlow from the many production environments into which it has been deployed. Recent research showed that Google’s focus on deployability through APIs such as TensorFlow Serving has driven adoption among professional data scientists who are building mission-critical AI projects for production environments. By contrast, the same research showed that PyTorch is being used more than TensorFlow for data analysis and ad-hoc models within a business context and for Python-based development of intelligent web applications.
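
As a concrete example of that deployability story: TensorFlow models are exported in the SavedModel format, which TensorFlow Serving loads directly and exposes over REST or gRPC. A minimal sketch, with a placeholder model and paths of my own choosing:

    import tensorflow as tf

    # Placeholder model; any SavedModel-exportable model works the same way.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="sgd", loss="mse")

    # TensorFlow Serving expects a numbered version subdirectory ("1").
    tf.saved_model.save(model, "/tmp/my_model/1")

    # Then serve it, for example (shell):
    #   tensorflow_model_server --rest_api_port=8501 \
    #       --model_name=my_model --model_base_path=/tmp/my_model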

Though TensorFlow still has the predominant market share among working data scientists, PyTorch has rapidly gained ground among key user segments. According to an October 2019 study by The Gradient, PyTorch has become the overwhelming favorite of data scientists in academic and other research positions, whereas TensorFlow continues to have strong adoption by enterprise AI, deep learning, and machine learning developers. PyTorch has built its following on such strengths as seamless integration with the Python ecosystem, a better-designed API, and better performance for some ad-hoc analyses.
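
That Python-ecosystem integration is concrete rather than a slogan: for example, PyTorch tensors can share memory with NumPy arrays, so ad-hoc analysis code moves between the two with little friction. A small sketch:

    import numpy as np
    import torch

    arr = np.arange(6.0).reshape(2, 3)
    t = torch.from_numpy(arr)  # zero-copy: tensor shares the array's memory
    t *= 2
    print(arr)                 # the NumPy array reflects the in-place change

    back = t.numpy()           # back to NumPy, again without a copy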

The Takeaway and Recommendations for Data Scientists

The clear leaders in AI modeling frameworks are now the Google-developed TensorFlow and the Facebook-developed PyTorch, and they’re pulling away from the rest of the market in usage, share, and momentum.

Though PyTorch has gained momentum in the marketplace, it is not likely to deliver a knockout punch to TensorFlow any time in the foreseeable future. For starters, Google continues to make significant investments in beefing up its TensorFlow platform stack. For example, Google has improved TensorFlow’s API in version 2.0, removing redundant symbols, providing consistent naming conventions, and recommending Keras as the principal high-level API for ease of use. The vendor has introduced by-default eager execution, which enables TensorFlow developers to immediately inspect how their changes to variables and other model components impact model performance. Developers can now create a single model that can be deployed to browsers, mobile devices, and servers through the add-on frameworks TensorFlow.js and TensorFlow Lite.
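
For readers who have not used eager mode, the practical difference is that operations execute immediately rather than inside a pre-compiled graph, so forward values and gradients can be inspected as ordinary Python objects. A minimal sketch with toy tensors of my own choosing:

    import tensorflow as tf  # TensorFlow 2.x: eager execution is the default

    x = tf.constant([[1.0, 2.0]])
    w = tf.Variable([[0.5], [0.5]])

    with tf.GradientTape() as tape:
        y = tf.matmul(x, w)                       # runs immediately
        loss = tf.reduce_sum(tf.square(y - 1.0))

    print(y.numpy())                       # inspect the forward value directly
    print(tape.gradient(loss, w).numpy())  # and the gradient, right away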

Google has also stepped up to the challenge of scaling and speeding TensorFlow performance. With version 2.0, TensorFlow now delivers as much as three times faster training performance using mixed precision on Volta and Turing GPUs with a few lines of code. And Google has promised framework updates in the near future that will integrate an intermediate representation compiler for the purpose of exporting models for easy execution in non-TensorFlow back ends and on a wide range of hardware targets.
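
In Keras, those “few lines” amount to a single global policy switch. A sketch; note that the mixed-precision API started under an experimental namespace in early 2.x releases, so the exact call depends on the TensorFlow version in use:

    import tensorflow as tf
    from tensorflow.keras import layers, mixed_precision

    # Compute in float16 while keeping variables in float32; on Volta and
    # Turing GPUs this routes matrix math through Tensor Cores.
    mixed_precision.set_global_policy("mixed_float16")

    model = tf.keras.Sequential([
        layers.Dense(512, activation="relu", input_shape=(784,)),
        # Keep the final softmax in float32 for numerical stability.
        layers.Dense(10, activation="softmax", dtype="float32"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")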

Considering the horse race into which Google and Facebook are locked over their respective AI modeling frameworks, it would be prudent for professional data scientists to split the difference. Going forward, most AI developers will probably use some blend of TensorFlow and PyTorch in most of their work. Indeed, both will continue to be available in most commercial data science workbenches. And the feature gaps between the two frameworks will continue to diminish, as can be seen from the underwhelming nature of their recent feature refreshes.

In a market in which core functions are well defined and users prize feature parity, I predict that TensorFlow will remain a core enterprise AI development tool. What remains to be seen is whether Facebook will continue to invest in PyTorch enough to keep it at least at functional parity with TensorFlow. I’ll continue to watch the PyTorch growth trajectory with interest.

Bottom line, while TensorFlow’s deep learning dominance may be waning in the face of serious competition from PyTorch, one key fact remains: unlike Facebook’s, Google’s core business model depends on furnishing a diverse partner ecosystem with strong AI capabilities. Google has significantly more to lose if its AI modeling tool fails to sustain its momentum among enterprise developers.

Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.

Image Credit: FB Engineering

Author Information

James has held analyst and consulting positions at SiliconANGLE/Wikibon, Forrester Research, Current Analysis and the Burton Group. He is an industry veteran, having held marketing and product management positions at IBM, Exostar, and LCC. He is a widely published business technology author, has published several books on enterprise technology, and contributes regularly to InformationWeek, InfoWorld, Datanami, Dataversity, and other publications.
